{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Creating Text-Fabric dataset (from LowFat XML trees)\n",
"\n",
"Version: 0.4 (July 25, 2023 - major updates; changing feature names; updated documentation)\n",
"\n",
"## Table of content \n",
"* 1 - Introduction\n",
"* 2 - Read LowFat XML data and store in pickle\n",
" * 2.1 - Import various libraries\n",
" * 2.2 - Initialize global data\n",
" * 2.3 - Add parent info to each node of the XML tree\n",
" * 2.4 - Process the XML data and store dataframe in pickle\n",
"* 3 - Production Text-Fabric from pickle input\n",
" * 3.1 - Load libraries and initialize some data\n",
" * 3.2 - Optionaly export to Excel for investigation"
]
},
{
"cell_type": "markdown",
"metadata": {
"tags": []
},
"source": [
"# 1 - Introduction \n",
"##### [Back to TOC](#TOC)\n",
"\n",
"The source data for the conversion are the LowFat XML trees files representing the macula-greek version of the Nestle 1904 Greek New Testment (British Foreign Bible Society, 1904). The starting dataset is formatted according to Syntax diagram markup by the Global Bible Initiative (GBI). The most recent source data can be found on github https://github.com/Clear-Bible/macula-greek/tree/main/Nestle1904/lowfat. \n",
"\n",
"Attribution: \"MACULA Greek Linguistic Datasets, available at https://github.com/Clear-Bible/macula-greek/\". \n",
"\n",
"The production of the Text-Fabric files consist of two phases. First one is the creation of piclke files (section 2). The second phase is the the actual Text-Fabric creation process (section 3). The process can be depicted as follows:\n",
"\n",
""
]
},
{
"cell_type": "markdown",
"metadata": {
"tags": []
},
"source": [
"# 2 - Read LowFat XML data and store in pickle \n",
"##### [Back to TOC](#TOC)\n",
"\n",
"This script harvests all information from the LowFat tree data (XML nodes), puts it into a Panda DataFrame and stores the result per book in a pickle file. Note: pickling (in Python) is serialising an object into a disk file (or buffer). See also the [Python3 documentation](https://docs.python.org/3/library/pickle.html).\n",
"\n",
"Within the context of this script, the term 'Leaf' refers to nodes that contain the Greek word as data. These nodes are also referred to as 'terminal nodes' since they do not have any children, similar to leaves on a tree. Additionally, Parent1 represents the parent of the leaf, Parent2 represents the parent of Parent1, and so on. For a visual representation, please refer to the following diagram.\n",
"\n",
"\n",
"\n",
"For a full description of the source data see document [MACULA Greek Treebank for the Nestle 1904 Greek New Testament.pdf](https://github.com/Clear-Bible/macula-greek/blob/main/doc/MACULA%20Greek%20Treebank%20for%20the%20Nestle%201904%20Greek%20New%20Testament.pdf)"
]
},
{
"cell_type": "markdown",
"metadata": {
"tags": []
},
"source": [
"## 2.1 - Import various libraries\n",
"##### [Back to TOC](#TOC)"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"ExecuteTime": {
"end_time": "2022-10-28T02:58:14.739227Z",
"start_time": "2022-10-28T02:57:38.766097Z"
}
},
"outputs": [],
"source": [
"import pandas as pd\n",
"import sys\n",
"import os\n",
"import time\n",
"import pickle\n",
"\n",
"import re #regular expressions\n",
"from os import listdir\n",
"from os.path import isfile, join\n",
"import xml.etree.ElementTree as ET\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2.2 - Initialize global data\n",
"##### [Back to TOC](#TOC)\n",
"\n",
"The following global data initializes the script, gathering the XML data to store it into the pickle files.\n",
"\n",
"IMPORTANT: To ensure proper creation of the Text-Fabric files on your system, it is crucial to adjust the values of BaseDir, InputDir, and OutputDir to match the location of the data and the operating system you are using. In this Jupyter Notebook, Windows is the operating system employed."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"BaseDir = 'D:\\\\'\n",
"XmlDir = BaseDir+'xml\\\\'\n",
"PklDir = BaseDir+'pkl\\\\'\n",
"XlsxDir = BaseDir+'xlsx\\\\'\n",
"# note: create output directory prior running this part\n",
"\n",
"# key: filename, [0]=book_long, [1]=book_num, [3]=book_short\n",
"bo2book = {'01-matthew': ['Matthew', '1', 'Matt'],\n",
" '02-mark': ['Mark', '2', 'Mark'],\n",
" '03-luke': ['Luke', '3', 'Luke'],\n",
" '04-john': ['John', '4', 'John'],\n",
" '05-acts': ['Acts', '5', 'Acts'],\n",
" '06-romans': ['Romans', '6', 'Rom'],\n",
" '07-1corinthians': ['I_Corinthians', '7', '1Cor'],\n",
" '08-2corinthians': ['II_Corinthians', '8', '2Cor'],\n",
" '09-galatians': ['Galatians', '9', 'Gal'],\n",
" '10-ephesians': ['Ephesians', '10', 'Eph'],\n",
" '11-philippians': ['Philippians', '11', 'Phil'],\n",
" '12-colossians': ['Colossians', '12', 'Col'],\n",
" '13-1thessalonians':['I_Thessalonians', '13', '1Thess'],\n",
" '14-2thessalonians':['II_Thessalonians','14', '2Thess'],\n",
" '15-1timothy': ['I_Timothy', '15', '1Tim'],\n",
" '16-2timothy': ['II_Timothy', '16', '2Tim'],\n",
" '17-titus': ['Titus', '17', 'Titus'],\n",
" '18-philemon': ['Philemon', '18', 'Phlm'],\n",
" '19-hebrews': ['Hebrews', '19', 'Heb'],\n",
" '20-james': ['James', '20', 'Jas'],\n",
" '21-1peter': ['I_Peter', '21', '1Pet'],\n",
" '22-2peter': ['II_Peter', '22', '2Pet'],\n",
" '23-1john': ['I_John', '23', '1John'],\n",
" '24-2john': ['II_John', '24', '2John'],\n",
" '25-3john': ['III_John', '25', '3John'], \n",
" '26-jude': ['Jude', '26', 'Jude'],\n",
" '27-revelation': ['Revelation', '27', 'Rev']}"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2.3 - Add parent info to each node of the XML tree\n",
"##### [Back to TOC](#TOC)\n",
"\n",
"In order to be able to traverse from the 'leafs' upto the root of the tree, it is required to add information to each node pointing to the parent of each node. The terminating nodes of an XML tree are called \"leaf nodes\" or \"leaves.\" These nodes do not have any child elements and are located at the end of a branch in the XML tree. Leaf nodes contain the actual data or content within an XML document. In contrast, non-leaf nodes are called \"internal nodes,\" which have one or more child elements.\n",
"\n",
"(Attribution: the concept of following functions is taken from https://stackoverflow.com/questions/2170610/access-elementtree-node-parent-node)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"def addParentInfo(et):\n",
" for child in et:\n",
" child.attrib['parent'] = et\n",
" addParentInfo(child)\n",
"\n",
"def getParent(et):\n",
" if 'parent' in et.attrib:\n",
" return et.attrib['parent']\n",
" else:\n",
" return None"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2.4 - Process the XML data and store dataframe in pickle\n",
"##### [Back to TOC](#TOC)\n",
"\n",
"This code processes books in the correct order. Firstly, it parses the XML and adds parent information to each node. Then, it loops through the nodes and checks if it is a 'leaf' node, meaning it contains only one word. If it is a 'leaf' node, the following steps are performed:\n",
"\n",
"* Adds computed data to the 'leaf' nodes in memory.\n",
"* Traverses from the 'leaf' node up to the root and adds information from the parent, grandparent, and so on, to the 'leaf' node.\n",
"* Once it reaches the root, it stops and stores all the gathered information in a dataframe that will be added to the full_dataframe.\n",
"* After processing all the nodes for a specific book, the full_dataframe is exported to a pickle file specific to that book.\n",
"\n",
"Note that this script takes a long time to execute (due to the large number of itterations). However, once the XML data is converted to PKL, there is no need to rerun (unless the source XML data is updated)."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"scrolled": true,
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Processing Matthew at D:\\xml\\01-matthew.xml\n",
"......................................................................................................................................................................................\n",
"Found 18299 items in 67.42706108093262 seconds\n",
"\n",
"Processing Mark at D:\\xml\\02-mark.xml\n",
"................................................................................................................\n",
"Found 11277 items in 42.43761920928955 seconds\n",
"\n",
"Processing Luke at D:\\xml\\03-luke.xml\n",
"..................................................................................................................................................................................................\n",
"Found 19456 items in 225.01683640480042 seconds\n",
"\n",
"Processing John at D:\\xml\\04-john.xml\n",
"............................................................................................................................................................\n",
"Found 15643 items in 57.86876893043518 seconds\n",
"\n",
"Processing Acts at D:\\xml\\05-acts.xml\n",
".......................................................................................................................................................................................\n",
"Found 18393 items in 74.77593684196472 seconds\n",
"\n",
"Processing Romans at D:\\xml\\06-romans.xml\n",
".......................................................................\n",
"Found 7100 items in 26.951738595962524 seconds\n",
"\n",
"Processing I_Corinthians at D:\\xml\\07-1corinthians.xml\n",
"....................................................................\n",
"Found 6820 items in 24.996010541915894 seconds\n",
"\n",
"Processing II_Corinthians at D:\\xml\\08-2corinthians.xml\n",
"............................................\n",
"Found 4469 items in 18.33352518081665 seconds\n",
"\n",
"Processing Galatians at D:\\xml\\09-galatians.xml\n",
"......................\n",
"Found 2228 items in 7.807831287384033 seconds\n",
"\n",
"Processing Ephesians at D:\\xml\\10-ephesians.xml\n",
"........................\n",
"Found 2419 items in 10.39232063293457 seconds\n",
"\n",
"Processing Philippians at D:\\xml\\11-philippians.xml\n",
"................\n",
"Found 1630 items in 5.758781671524048 seconds\n",
"\n",
"Processing Colossians at D:\\xml\\12-colossians.xml\n",
"...............\n",
"Found 1575 items in 6.45435094833374 seconds\n",
"\n",
"Processing I_Thessalonians at D:\\xml\\13-1thessalonians.xml\n",
"..............\n",
"Found 1473 items in 5.037351369857788 seconds\n",
"\n",
"Processing II_Thessalonians at D:\\xml\\14-2thessalonians.xml\n",
"........\n",
"Found 822 items in 2.6990747451782227 seconds\n",
"\n",
"Processing I_Timothy at D:\\xml\\15-1timothy.xml\n",
"...............\n",
"Found 1588 items in 7.33566427230835 seconds\n",
"\n",
"Processing II_Timothy at D:\\xml\\16-2timothy.xml\n",
"............\n",
"Found 1237 items in 5.416724443435669 seconds\n",
"\n",
"Processing Titus at D:\\xml\\17-titus.xml\n",
"......\n",
"Found 658 items in 2.3939695358276367 seconds\n",
"\n",
"Processing Philemon at D:\\xml\\18-philemon.xml\n",
"...\n",
"Found 335 items in 1.0265004634857178 seconds\n",
"\n",
"Processing Hebrews at D:\\xml\\19-hebrews.xml\n",
".................................................\n",
"Found 4955 items in 17.75324273109436 seconds\n",
"\n",
"Processing James at D:\\xml\\20-james.xml\n",
".................\n",
"Found 1739 items in 5.078527212142944 seconds\n",
"\n",
"Processing I_Peter at D:\\xml\\21-1peter.xml\n",
"................\n",
"Found 1676 items in 7.466632127761841 seconds\n",
"\n",
"Processing II_Peter at D:\\xml\\22-2peter.xml\n",
"..........\n",
"Found 1098 items in 4.20117712020874 seconds\n",
"\n",
"Processing I_John at D:\\xml\\23-1john.xml\n",
".....................\n",
"Found 2136 items in 7.3064656257629395 seconds\n",
"\n",
"Processing II_John at D:\\xml\\24-2john.xml\n",
"..\n",
"Found 245 items in 0.6724810600280762 seconds\n",
"\n",
"Processing III_John at D:\\xml\\25-3john.xml\n",
"..\n",
"Found 219 items in 0.4172031879425049 seconds\n",
"\n",
"Processing Jude at D:\\xml\\26-jude.xml\n",
"....\n",
"Found 457 items in 1.5917177200317383 seconds\n",
"\n",
"Processing Revelation at D:\\xml\\27-revelation.xml\n",
"..................................................................................................\n",
"Found 9832 items in 40.72259497642517 seconds\n",
"\n"
]
}
],
"source": [
"# set some globals\n",
"WordOrder=1 # stores the word order as it is found in the XML files (unique number for each word in the full corpus)\n",
"CollectedItems= 0\n",
"\n",
"# process books in order\n",
"for bo, bookinfo in bo2book.items():\n",
" CollectedItems=0\n",
" SentenceNumber=0\n",
" WordGroupNumber=0\n",
" full_df=pd.DataFrame({})\n",
" book_long=bookinfo[0]\n",
" booknum=bookinfo[1]\n",
" book_short=bookinfo[2]\n",
" InputFile = os.path.join(XmlDir, f'{bo}.xml')\n",
" OutputFile = os.path.join(PklDir, f'{bo}.pkl')\n",
" print(f'Processing {book_long} at {InputFile}')\n",
" DataFrameList = []\n",
"\n",
" # Send XML document to parsing process\n",
" tree = ET.parse(InputFile)\n",
" # Now add all the parent info to the nodes in the xtree [important!]\n",
" addParentInfo(tree.getroot())\n",
" start_time = time.time()\n",
" \n",
" # walk over all the XML data\n",
" for elem in tree.iter():\n",
" if elem.tag == 'sentence':\n",
" # add running number to 'sentence' tags\n",
" SentenceNumber+=1\n",
" elem.set('SN', SentenceNumber)\n",
" if elem.tag == 'wg':\n",
" # add running number to 'wg' tags\n",
" WordGroupNumber+=1\n",
" elem.set('WGN', WordGroupNumber)\n",
" if elem.tag == 'w':\n",
" # all nodes containing words are tagged with 'w'\n",
" \n",
" # show progress on screen\n",
" CollectedItems+=1\n",
" if (CollectedItems%100==0): print (\".\",end='')\n",
" \n",
" #Leafref will contain list with book, chapter verse and wordnumber\n",
" Leafref = re.sub(r'[!: ]',\" \", elem.attrib.get('ref')).split()\n",
" \n",
" #push value for word_order to element tree \n",
" elem.set('word_order', WordOrder)\n",
" WordOrder+=1\n",
" \n",
" # add some important computed data to the leaf\n",
" elem.set('LeafName', elem.tag)\n",
" elem.set('word', elem.text)\n",
" elem.set('book_long', book_long)\n",
" elem.set('booknum', int(booknum))\n",
" elem.set('book_short', book_short)\n",
" elem.set('chapter', int(Leafref[1]))\n",
" elem.set('verse', int(Leafref[2]))\n",
" \n",
" # folling code will trace down parents upto the tree and store found attributes\n",
" parentnode=getParent(elem)\n",
" index=0\n",
" while (parentnode):\n",
" index+=1\n",
" elem.set('Parent{}Name'.format(index), parentnode.tag)\n",
" elem.set('Parent{}Type'.format(index), parentnode.attrib.get('type'))\n",
" elem.set('Parent{}Appos'.format(index), parentnode.attrib.get('appositioncontainer'))\n",
" elem.set('Parent{}Class'.format(index), parentnode.attrib.get('class'))\n",
" elem.set('Parent{}Rule'.format(index), parentnode.attrib.get('rule'))\n",
" elem.set('Parent{}Role'.format(index), parentnode.attrib.get('role'))\n",
" elem.set('Parent{}Cltype'.format(index), parentnode.attrib.get('cltype'))\n",
" elem.set('Parent{}Unit'.format(index), parentnode.attrib.get('unit'))\n",
" elem.set('Parent{}Junction'.format(index), parentnode.attrib.get('junction'))\n",
" elem.set('Parent{}SN'.format(index), parentnode.attrib.get('SN'))\n",
" elem.set('Parent{}WGN'.format(index), parentnode.attrib.get('WGN'))\n",
" currentnode=parentnode\n",
" parentnode=getParent(currentnode) \n",
" elem.set('parents', int(index))\n",
" \n",
" #this will add all elements found in the tree to a list of dataframes\n",
" DataFrameChunk=pd.DataFrame(elem.attrib, index={'word_order'})\n",
" DataFrameList.append(DataFrameChunk)\n",
" \n",
" #store the resulting DataFrame per book into a pickle file for further processing\n",
" full_df = pd.concat([df for df in DataFrameList])\n",
"\n",
" output = open(r\"{}\".format(OutputFile), 'wb')\n",
" pickle.dump(full_df, output)\n",
" output.close()\n",
" print(\"\\nFound \",CollectedItems, \" items in %s seconds\\n\" % (time.time() - start_time)) \n",
" "
]
},
{
"cell_type": "markdown",
"metadata": {
"toc": true
},
"source": [
"# 3 - Nestle1904LFT Text-Fabric production from pickle input\n",
"##### [Back to TOC](#TOC)\n",
"\n",
"This script creates the Text-Fabric files by recursive calling the TF walker function.\n",
"API info: https://annotation.github.io/text-fabric/tf/convert/walker.html\n",
"\n",
"The pickle files created by the script in section 2.4 are stored on Github location [/resources/pickle](https://github.com/tonyjurg/Nestle1904LFT/tree/main/resources/pickle)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3.1 - Load libraries and initialize some data\n",
"##### [Back to TOC](#TOC)\n",
"\n",
"The following global data initializes the Text-Fabric conversion script.\n",
"\n",
"IMPORTANT: To ensure the proper creation of the Text-Fabric files on your system, it is crucial to adjust the values of BaseDir, PklDir, etc., to match the location of the data and the operating system you are using. This Jupyter Notebook employs the Windows operating system."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"ExecuteTime": {
"end_time": "2022-10-28T03:01:34.810259Z",
"start_time": "2022-10-28T03:01:25.745112Z"
}
},
"outputs": [],
"source": [
"import pandas as pd\n",
"import os\n",
"import re\n",
"import gc\n",
"from tf.fabric import Fabric\n",
"from tf.convert.walker import CV\n",
"from tf.parameters import VERSION\n",
"from datetime import date\n",
"import pickle\n",
"\n",
"BaseDir = 'D:\\\\'\n",
"XmlDir = BaseDir+'xml\\\\'\n",
"PklDir = BaseDir+'pkl\\\\'\n",
"XlsxDir = BaseDir+'xlsx\\\\'\n",
"\n",
"\n",
"# key: filename, [0]=book_long, [1]=book_num, [3]=book_short\n",
"bo2book = {'01-matthew': ['Matthew', '1', 'Matt'],\n",
" '02-mark': ['Mark', '2', 'Mark'],\n",
" '03-luke': ['Luke', '3', 'Luke'],\n",
" '04-john': ['John', '4', 'John'],\n",
" '05-acts': ['Acts', '5', 'Acts'],\n",
" '06-romans': ['Romans', '6', 'Rom'],\n",
" '07-1corinthians': ['I_Corinthians', '7', '1Cor'],\n",
" '08-2corinthians': ['II_Corinthians', '8', '2Cor'],\n",
" '09-galatians': ['Galatians', '9', 'Gal'],\n",
" '10-ephesians': ['Ephesians', '10', 'Eph'],\n",
" '11-philippians': ['Philippians', '11', 'Phil'],\n",
" '12-colossians': ['Colossians', '12', 'Col'],\n",
" '13-1thessalonians':['I_Thessalonians', '13', '1Thess'],\n",
" '14-2thessalonians':['II_Thessalonians','14', '2Thess'],\n",
" '15-1timothy': ['I_Timothy', '15', '1Tim'],\n",
" '16-2timothy': ['II_Timothy', '16', '2Tim'],\n",
" '17-titus': ['Titus', '17', 'Titus'],\n",
" '18-philemon': ['Philemon', '18', 'Phlm'],\n",
" '19-hebrews': ['Hebrews', '19', 'Heb'],\n",
" '20-james': ['James', '20', 'Jas'],\n",
" '21-1peter': ['I_Peter', '21', '1Pet'],\n",
" '22-2peter': ['II_Peter', '22', '2Pet'],\n",
" '23-1john': ['I_John', '23', '1John'],\n",
" '24-2john': ['II_John', '24', '2John'],\n",
" '25-3john': ['III_John', '25', '3John'], \n",
" '26-jude': ['Jude', '26', 'Jude'],\n",
" '27-revelation': ['Revelation', '27', 'Rev']}\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3.2 - Optionaly export to Excel for investigation\n",
"##### [Back to TOC](#TOC)\n",
"\n",
"This step is optional. It will allow for manual examining the input data to the Text-Fabric conversion script."
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\tloading D:\\pkl\\01-matthew.pkl...\n",
"\tloading D:\\pkl\\02-mark.pkl...\n",
"\tloading D:\\pkl\\03-luke.pkl...\n",
"\tloading D:\\pkl\\04-john.pkl...\n",
"\tloading D:\\pkl\\05-acts.pkl...\n",
"\tloading D:\\pkl\\06-romans.pkl...\n",
"\tloading D:\\pkl\\07-1corinthians.pkl...\n",
"\tloading D:\\pkl\\08-2corinthians.pkl...\n",
"\tloading D:\\pkl\\09-galatians.pkl...\n",
"\tloading D:\\pkl\\10-ephesians.pkl...\n",
"\tloading D:\\pkl\\11-philippians.pkl...\n",
"\tloading D:\\pkl\\12-colossians.pkl...\n",
"\tloading D:\\pkl\\13-1thessalonians.pkl...\n",
"\tloading D:\\pkl\\14-2thessalonians.pkl...\n",
"\tloading D:\\pkl\\15-1timothy.pkl...\n",
"\tloading D:\\pkl\\16-2timothy.pkl...\n",
"\tloading D:\\pkl\\17-titus.pkl...\n",
"\tloading D:\\pkl\\18-philemon.pkl...\n",
"\tloading D:\\pkl\\19-hebrews.pkl...\n",
"\tloading D:\\pkl\\20-james.pkl...\n",
"\tloading D:\\pkl\\21-1peter.pkl...\n",
"\tloading D:\\pkl\\22-2peter.pkl...\n",
"\tloading D:\\pkl\\23-1john.pkl...\n",
"\tloading D:\\pkl\\24-2john.pkl...\n",
"\tloading D:\\pkl\\25-3john.pkl...\n",
"\tloading D:\\pkl\\26-jude.pkl...\n",
"\tloading D:\\pkl\\27-revelation.pkl...\n"
]
}
],
"source": [
"# test: sorting the data\n",
"import openpyxl\n",
"import pickle\n",
"\n",
"#if True:\n",
"for bo in bo2book:\n",
" '''\n",
" load all data into a dataframe\n",
" process books in order (bookinfo is a list!)\n",
" ''' \n",
" InputFile = os.path.join(PklDir, f'{bo}.pkl')\n",
" \n",
" print(f'\\tloading {InputFile}...')\n",
" pkl_file = open(InputFile, 'rb')\n",
" df = pickle.load(pkl_file)\n",
" pkl_file.close()\n",
" df.to_excel(os.path.join(XlsxDir, f'{bo}.xlsx'), index=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3.3 - Running the TF walker function\n",
"##### [Back to TOC](#TOC)\n",
"\n",
"API info: https://annotation.github.io/text-fabric/tf/convert/walker.html\n",
"\n",
"Explanatory notes about the data interpretation logic are incorporated within the Python code of the director function."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"This is Text-Fabric 11.4.10\n",
"0 features found and 0 ignored\n",
" 0.00s Not all of the warp features otype and oslots are present in\n",
"D:\n",
" 0.00s Only the Feature and Edge APIs will be enabled\n",
" 0.00s Warp feature \"otext\" not found. Working without Text-API\n",
"\n",
" 0.00s Importing data from walking through the source ...\n",
" | 0.00s Preparing metadata... \n",
" | SECTION TYPES: book, chapter, verse\n",
" | SECTION FEATURES: book, chapter, verse\n",
" | STRUCTURE TYPES: book, chapter, verse\n",
" | STRUCTURE FEATURES: book, chapter, verse\n",
" | TEXT FEATURES:\n",
" | | text-orig-full after, word\n",
" | 0.00s OK\n",
" | 0.00s Following director... \n",
"\tWe are loading D:\\pkl\\01-matthew.pkl...\n",
"\tWe are loading D:\\pkl\\02-mark.pkl...\n",
"\tWe are loading D:\\pkl\\03-luke.pkl...\n",
"\tWe are loading D:\\pkl\\04-john.pkl...\n",
"\tWe are loading D:\\pkl\\05-acts.pkl...\n",
"\tWe are loading D:\\pkl\\06-romans.pkl...\n",
"\tWe are loading D:\\pkl\\07-1corinthians.pkl...\n",
"\tWe are loading D:\\pkl\\08-2corinthians.pkl...\n",
"\tWe are loading D:\\pkl\\09-galatians.pkl...\n",
"\tWe are loading D:\\pkl\\10-ephesians.pkl...\n",
"\tWe are loading D:\\pkl\\11-philippians.pkl...\n",
"\tWe are loading D:\\pkl\\12-colossians.pkl...\n",
"\tWe are loading D:\\pkl\\13-1thessalonians.pkl...\n",
"\tWe are loading D:\\pkl\\14-2thessalonians.pkl...\n",
"\tWe are loading D:\\pkl\\15-1timothy.pkl...\n",
"\tWe are loading D:\\pkl\\16-2timothy.pkl...\n",
"\tWe are loading D:\\pkl\\17-titus.pkl...\n",
"\tWe are loading D:\\pkl\\18-philemon.pkl...\n",
"\tWe are loading D:\\pkl\\19-hebrews.pkl...\n",
"\tWe are loading D:\\pkl\\20-james.pkl...\n",
"\tWe are loading D:\\pkl\\21-1peter.pkl...\n",
"\tWe are loading D:\\pkl\\22-2peter.pkl...\n",
"\tWe are loading D:\\pkl\\23-1john.pkl...\n",
"\tWe are loading D:\\pkl\\24-2john.pkl...\n",
"\tWe are loading D:\\pkl\\25-3john.pkl...\n",
"\tWe are loading D:\\pkl\\26-jude.pkl...\n",
"\tWe are loading D:\\pkl\\27-revelation.pkl...\n",
" | 39s \"edge\" actions: 0\n",
" | 39s \"feature\" actions: 267467\n",
" | 39s \"node\" actions: 129688\n",
" | 39s \"resume\" actions: 9627\n",
" | 39s \"slot\" actions: 137779\n",
" | 39s \"terminate\" actions: 277221\n",
" | 27 x \"book\" node \n",
" | 260 x \"chapter\" node \n",
" | 8011 x \"sentence\" node \n",
" | 7943 x \"verse\" node \n",
" | 113447 x \"wg\" node \n",
" | 137779 x \"word\" node = slot type\n",
" | 267467 nodes of all types\n",
" | 39s OK\n",
" | 0.00s checking for nodes and edges ... \n",
" | 0.00s OK\n",
" | 0.00s checking (section) features ... \n",
" | 0.19s OK\n",
" | 0.00s reordering nodes ...\n",
" | 0.03s Sorting 27 nodes of type \"book\"\n",
" | 0.04s Sorting 260 nodes of type \"chapter\"\n",
" | 0.05s Sorting 8011 nodes of type \"sentence\"\n",
" | 0.07s Sorting 7943 nodes of type \"verse\"\n",
" | 0.09s Sorting 113447 nodes of type \"wg\"\n",
" | 0.20s Max node = 267467\n",
" | 0.20s OK\n",
" | 0.00s reassigning feature values ...\n",
" | | 0.00s node feature \"after\" with 137779 nodes\n",
" | | 0.03s node feature \"appos\" with 113447 nodes\n",
" | | 0.07s node feature \"book\" with 137806 nodes\n",
" | | 0.11s node feature \"booknumber\" with 137806 nodes\n",
" | | 0.14s node feature \"bookshort\" with 137806 nodes\n",
" | | 0.18s node feature \"case\" with 137779 nodes\n",
" | | 0.21s node feature \"chapter\" with 153939 nodes\n",
" | | 0.25s node feature \"clausetype\" with 113447 nodes\n",
" | | 0.29s node feature \"containedclause\" with 137779 nodes\n",
" | | 0.32s node feature \"degree\" with 137779 nodes\n",
" | | 0.35s node feature \"gloss\" with 137779 nodes\n",
" | | 0.39s node feature \"gn\" with 137779 nodes\n",
" | | 0.42s node feature \"junction\" with 113447 nodes\n",
" | | 0.49s node feature \"lemma\" with 137779 nodes\n",
" | | 0.54s node feature \"lex_dom\" with 137779 nodes\n",
" | | 0.59s node feature \"ln\" with 137779 nodes\n",
" | | 0.62s node feature \"monad\" with 137779 nodes\n",
" | | 0.66s node feature \"mood\" with 137779 nodes\n",
" | | 0.69s node feature \"morph\" with 137779 nodes\n",
" | | 0.73s node feature \"nodeID\" with 137779 nodes\n",
" | | 0.76s node feature \"normalized\" with 137779 nodes\n",
" | | 0.80s node feature \"nu\" with 137779 nodes\n",
" | | 0.83s node feature \"number\" with 137779 nodes\n",
" | | 0.87s node feature \"orig_order\" with 137779 nodes\n",
" | | 0.90s node feature \"person\" with 137779 nodes\n",
" | | 0.93s node feature \"ref\" with 137779 nodes\n",
" | | 0.97s node feature \"reference\" with 137779 nodes\n",
" | | 1.00s node feature \"roleclausedistance\" with 137779 nodes\n",
" | | 1.04s node feature \"sentence\" with 137806 nodes\n",
" | | 1.07s node feature \"sp\" with 137779 nodes\n",
" | | 1.10s node feature \"sp_full\" with 137779 nodes\n",
" | | 1.14s node feature \"strongs\" with 137779 nodes\n",
" | | 1.17s node feature \"subj_ref\" with 137779 nodes\n",
" | | 1.21s node feature \"tense\" with 137779 nodes\n",
" | | 1.25s node feature \"type\" with 137779 nodes\n",
" | | 1.30s node feature \"unicode\" with 137779 nodes\n",
" | | 1.33s node feature \"verse\" with 153706 nodes\n",
" | | 1.38s node feature \"voice\" with 137779 nodes\n",
" | | 1.42s node feature \"wgclass\" with 113447 nodes\n",
" | | 1.46s node feature \"wglevel\" with 113447 nodes\n",
" | | 1.49s node feature \"wgnum\" with 113447 nodes\n",
" | | 1.53s node feature \"wgrole\" with 113447 nodes\n",
" | | 1.56s node feature \"wgrolelong\" with 113447 nodes\n",
" | | 1.60s node feature \"wgrule\" with 113447 nodes\n",
" | | 1.64s node feature \"wgtype\" with 113447 nodes\n",
" | | 1.68s node feature \"word\" with 137779 nodes\n",
" | | 1.71s node feature \"wordlevel\" with 137779 nodes\n",
" | | 1.75s node feature \"wordrole\" with 137779 nodes\n",
" | | 1.78s node feature \"wordrolelong\" with 137779 nodes\n",
" | 1.89s OK\n",
" 0.00s Exporting 50 node and 1 edge and 1 config features to D:/:\n",
" 0.00s VALIDATING oslots feature\n",
" 0.02s VALIDATING oslots feature\n",
" 0.02s maxSlot= 137779\n",
" 0.02s maxNode= 267467\n",
" 0.03s OK: oslots is valid\n",
" | 0.13s T after to D:\n",
" | 0.10s T appos to D:\n",
" | 0.13s T book to D:\n",
" | 0.12s T booknumber to D:\n",
" | 0.15s T bookshort to D:\n",
" | 0.13s T case to D:\n",
" | 0.14s T chapter to D:\n",
" | 0.10s T clausetype to D:\n",
" | 0.13s T containedclause to D:\n",
" | 0.12s T degree to D:\n",
" | 0.13s T gloss to D:\n",
" | 0.14s T gn to D:\n",
" | 0.10s T junction to D:\n",
" | 0.16s T lemma to D:\n",
" | 0.13s T lex_dom to D:\n",
" | 0.13s T ln to D:\n",
" | 0.12s T monad to D:\n",
" | 0.13s T mood to D:\n",
" | 0.13s T morph to D:\n",
" | 0.13s T nodeID to D:\n",
" | 0.15s T normalized to D:\n",
" | 0.13s T nu to D:\n",
" | 0.13s T number to D:\n",
" | 0.12s T orig_order to D:\n",
" | 0.05s T otype to D:\n",
" | 0.13s T person to D:\n",
" | 0.13s T ref to D:\n",
" | 0.13s T reference to D:\n",
" | 0.12s T roleclausedistance to D:\n",
" | 0.13s T sentence to D:\n",
" | 0.13s T sp to D:\n",
" | 0.13s T sp_full to D:\n",
" | 0.29s T strongs to D:\n",
" | 0.18s T subj_ref to D:\n",
" | 0.15s T tense to D:\n",
" | 0.14s T type to D:\n",
" | 0.16s T unicode to D:\n",
" | 0.14s T verse to D:\n",
" | 0.13s T voice to D:\n",
" | 0.11s T wgclass to D:\n",
" | 0.10s T wglevel to D:\n",
" | 0.10s T wgnum to D:\n",
" | 0.10s T wgrole to D:\n",
" | 0.10s T wgrolelong to D:\n",
" | 0.11s T wgrule to D:\n",
" | 0.10s T wgtype to D:\n",
" | 0.15s T word to D:\n",
" | 0.12s T wordlevel to D:\n",
" | 0.13s T wordrole to D:\n",
" | 0.13s T wordrolelong to D:\n",
" | 0.47s T oslots to D:\n",
" | 0.00s M otext to D:\n",
" 7.02s Exported 50 node features and 1 edge features and 1 config features to D:\n",
"done\n"
]
}
],
"source": [
"TF = Fabric(locations=BaseDir, silent=False)\n",
"cv = CV(TF)\n",
"\n",
"###############################################\n",
"# Common helper functions #\n",
"###############################################\n",
"\n",
"#Function to prevent errors during conversion due to missing data\n",
"def sanitize(input):\n",
" if isinstance(input, float): return ''\n",
" if isinstance(input, type(None)): return ''\n",
" else: return (input)\n",
"\n",
"\n",
"# Function to expand the syntactic categories of words or wordgroup\n",
"# See also \"MACULA Greek Treebank for the Nestle 1904 Greek New Testament.pdf\" \n",
"# page 5&6 (section 2.4 Syntactic Categories at Clause Level)\n",
"def ExpandRole(input):\n",
" if input==\"adv\": return 'Adverbial'\n",
" if input==\"io\": return 'Indirect Object'\n",
" if input==\"o\": return 'Object'\n",
" if input==\"o2\": return 'Second Object'\n",
" if input==\"s\": return 'Subject'\n",
" if input==\"p\": return 'Predicate'\n",
" if input==\"v\": return 'Verbal'\n",
" if input==\"vc\": return 'Verbal Copula'\n",
" if input=='aux': return 'Auxiliar'\n",
" return ''\n",
"\n",
"# Function to expantion of Part of Speech labels. See also the description in \n",
"# \"MACULA Greek Treebank for the Nestle 1904 Greek New Testament.pdf\" page 6&7\n",
"# (2.2. Syntactic Categories at Word Level: Part of Speech Labels)\n",
"def ExpandSP(input):\n",
" if input=='adj': return 'Adjective'\n",
" if input=='conj': return 'Conjunction'\n",
" if input=='det': return 'Determiner' \n",
" if input=='intj': return 'Interjection' \n",
" if input=='noun': return 'Noun' \n",
" if input=='num': return 'Numeral' \n",
" if input=='prep': return 'Preposition' \n",
" if input=='ptcl': return 'Particle' \n",
" if input=='pron': return 'Pronoun' \n",
" if input=='verb': return 'Verb' \n",
" return ''\n",
"\n",
"###############################################\n",
"# The director routine #\n",
"###############################################\n",
"\n",
"def director(cv):\n",
" \n",
" ###############################################\n",
" # Innitial setup of data etc. #\n",
" ###############################################\n",
" NoneType = type(None) # needed as tool to validate certain data\n",
" IndexDict = {} # init an empty dictionary\n",
" WordGroupDict={} # init a dummy dictionary\n",
" PrevWordGroupSet = WordGroupSet = []\n",
" PrevWordGroupList = WordGroupList = []\n",
" RootWordGroup = 0\n",
" WordNumber=FoundWords=WordGroupTrack=0\n",
" # The following is required to recover succesfully from an abnormal condition\n",
" # in the LowFat tree data where a element is labeled as \n",
" # this number is arbitrary but should be high enough not to clash with 'real' WG numbers\n",
" DummyWGN=200000 \n",
" \n",
" for bo,bookinfo in bo2book.items(): \n",
" \n",
" ###############################################\n",
" # start of section executed for each book #\n",
" ###############################################\n",
" \n",
" # note: bookinfo is a list! Split the data\n",
" Book = bookinfo[0] \n",
" BookNumber = int(bookinfo[1])\n",
" BookShort = bookinfo[2]\n",
" BookLoc = os.path.join(PklDir, f'{bo}.pkl') \n",
" \n",
" \n",
" # load data for this book into a dataframe. \n",
" # make sure wordorder is correct\n",
" print(f'\\tWe are loading {BookLoc}...')\n",
" pkl_file = open(BookLoc, 'rb')\n",
" df_unsorted = pickle.load(pkl_file)\n",
" pkl_file.close()\n",
" \n",
" '''\n",
" Fill dictionary of column names for this book \n",
" sort to ensure proper wordorder\n",
" '''\n",
" ItemsInRow=1\n",
" for itemname in df_unsorted.columns.to_list():\n",
" IndexDict.update({'i_{}'.format(itemname): ItemsInRow})\n",
" # This is to identify the collumn containing the key to sort upon\n",
" if itemname==\"{http://www.w3.org/XML/1998/namespace}id\": SortKey=ItemsInRow-1\n",
" ItemsInRow+=1 \n",
" df=df_unsorted.sort_values(by=df_unsorted.columns[SortKey])\n",
" del df_unsorted\n",
"\n",
" # Set up nodes for new book\n",
" ThisBookPointer = cv.node('book')\n",
" cv.feature(ThisBookPointer, book=Book, booknumber=BookNumber, bookshort=BookShort)\n",
" \n",
" ThisChapterPointer = cv.node('chapter')\n",
" cv.feature(ThisChapterPointer, chapter=1)\n",
" PreviousChapter=1\n",
" \n",
" ThisVersePointer = cv.node('verse')\n",
" cv.feature(ThisVersePointer, verse=1)\n",
" PreviousVerse=1\n",
" \n",
" ThisSentencePointer = cv.node('sentence')\n",
" cv.feature(ThisSentencePointer, sentence=1)\n",
" PreviousSentence=1 \n",
"\n",
"\n",
" ###############################################\n",
" # Iterate through words and construct objects #\n",
" ###############################################\n",
" \n",
" for row in df.itertuples():\n",
" WordNumber += 1\n",
" FoundWords +=1\n",
" \n",
" # Detect and act upon changes in sentences, verse and chapter \n",
" # the order of terminating and creating the nodes is critical: \n",
" # close verse - close chapter - open chapter - open verse \n",
" NumberOfParents = sanitize(row[IndexDict.get(\"i_parents\")])\n",
" ThisSentence=int(row[IndexDict.get(\"i_Parent{}SN\".format(NumberOfParents-1))])\n",
" ThisVerse = sanitize(row[IndexDict.get(\"i_verse\")])\n",
" ThisChapter = sanitize(row[IndexDict.get(\"i_chapter\")])\n",
" \n",
" if (ThisSentence!=PreviousSentence):\n",
" cv.terminate(ThisSentencePointer)\n",
" \n",
" if (ThisVerse!=PreviousVerse):\n",
" cv.terminate(ThisVersePointer)\n",
" \n",
" if (ThisChapter!=PreviousChapter):\n",
" cv.terminate(ThisChapterPointer)\n",
" PreviousChapter = ThisChapter\n",
" ThisChapterPointer = cv.node('chapter')\n",
" cv.feature(ThisChapterPointer, chapter=ThisChapter)\n",
" \n",
" if (ThisVerse!=PreviousVerse):\n",
" PreviousVerse = ThisVerse \n",
" ThisVersePointer = cv.node('verse')\n",
" cv.feature(ThisVersePointer, verse=ThisVerse, chapter=ThisChapter)\n",
" \n",
" if (ThisSentence!=PreviousSentence):\n",
" PreviousSentence=ThisSentence\n",
" ThisSentencePointer = cv.node('sentence')\n",
" cv.feature(ThisSentencePointer, verse=ThisVerse, chapter=ThisChapter) \n",
"\n",
" \n",
" ###############################################\n",
" # analyze and process tags #\n",
" ###############################################\n",
" \n",
" PrevWordGroupList=WordGroupList\n",
" WordGroupList=[] # stores current active WordGroup numbers\n",
"\n",
" for i in range(NumberOfParents-2,0,-1): # important: reversed itteration!\n",
" _WGN=row[IndexDict.get(\"i_Parent{}WGN\".format(i))]\n",
" if isinstance(_WGN, type(None)): \n",
" # handling conditions where XML data has e.g. Acts 26:12\n",
" # to recover, we need to create a dummy WG with a sufficient high WGN so it can never match any real WGN. \n",
" WGN=DummyWGN\n",
" else:\n",
" WGN=int(_WGN)\n",
" if WGN!='':\n",
" WordGroupList.append(WGN)\n",
" WordGroupDict[(WGN,0)]=WGN\n",
" WGclass=sanitize(row[IndexDict.get(\"i_Parent{}Class\".format(i))])\n",
" WGrule=sanitize(row[IndexDict.get(\"i_Parent{}Rule\".format(i))])\n",
" WGtype=sanitize(row[IndexDict.get(\"i_Parent{}Type\".format(i))])\n",
" if WGclass==WGrule==WGtype=='':\n",
" WGclass='to be skipped?'\n",
" if WGrule[-2:]=='CL' and WGclass=='': \n",
" WGclass='cl*' # to simulate the way Logos presents this condition\n",
" WordGroupDict[(WGN,6)]=WGclass\n",
" WordGroupDict[(WGN,1)]=WGrule\n",
" WordGroupDict[(WGN,8)]=WGtype\n",
" WordGroupDict[(WGN,3)]=sanitize(row[IndexDict.get(\"i_Parent{}Junction\".format(i))])\n",
" WordGroupDict[(WGN,2)]=sanitize(row[IndexDict.get(\"i_Parent{}Cltype\".format(i))])\n",
" WordGroupDict[(WGN,7)]=sanitize(row[IndexDict.get(\"i_Parent{}Role\".format(i))])\n",
"\n",
" WordGroupDict[(WGN,9)]=sanitize(row[IndexDict.get(\"i_Parent{}Appos\".format(i))]) \n",
" WordGroupDict[(WGN,10)]=NumberOfParents-1-i # = number of parent wordgroups \n",
" if not PrevWordGroupList==WordGroupList:\n",
" if RootWordGroup != WordGroupList[0]:\n",
" RootWordGroup = WordGroupList[0]\n",
" SuspendableWordGoupList = []\n",
" # we have a new sentence. rebuild suspendable wordgroup list\n",
" # some cleaning of data may be added here to save on memmory... \n",
" #for k in range(6): del WordGroupDict[item,k]\n",
" for item in reversed(PrevWordGroupList):\n",
" if (item not in WordGroupList):\n",
" # CLOSE/SUSPEND CASE\n",
" SuspendableWordGoupList.append(item)\n",
" cv.terminate(WordGroupDict[item,4])\n",
" for item in WordGroupList:\n",
" if (item not in PrevWordGroupList):\n",
" if (item in SuspendableWordGoupList):\n",
" # RESUME CASE\n",
" #print ('\\n resume: '+str(item),end=' ')\n",
" cv.resume(WordGroupDict[(item,4)])\n",
" else:\n",
" # CREATE CASE\n",
" #print ('\\n create: '+str(item),end=' ')\n",
" WordGroupDict[(item,4)]=cv.node('wg')\n",
" WordGroupDict[(item,5)]=WordGroupTrack\n",
" WordGroupTrack += 1\n",
" cv.feature(WordGroupDict[(item,4)], wgnum=WordGroupDict[(item,0)], junction=WordGroupDict[(item,3)], \n",
" clausetype=WordGroupDict[(item,2)], wgrule=WordGroupDict[(item,1)], wgclass=WordGroupDict[(item,6)], \n",
" wgrole=WordGroupDict[(item,7)],wgrolelong=ExpandRole(WordGroupDict[(item,7)]),\n",
" wgtype=WordGroupDict[(item,8)],appos=WordGroupDict[(item,8)],wglevel=WordGroupDict[(item,10)])\n",
"\n",
" \n",
" \n",
" # These roles are performed either by a WG or just a single word.\n",
" Role=row[IndexDict.get(\"i_role\")]\n",
" ValidRoles=[\"adv\",\"io\",\"o\",\"o2\",\"s\",\"p\",\"v\",\"vc\",\"aux\"]\n",
" DistanceToRoleClause=0\n",
" if isinstance (Role,str) and Role in ValidRoles: \n",
" # Role is assign to this word (uniqely)\n",
" WordRole=Role\n",
" WordRoleLong=ExpandRole(WordRole)\n",
" else:\n",
" # Role details needs to be taken from some uptree wordgroup \n",
" WordRole=WordRoleLong=''\n",
" for item in range(1,NumberOfParents-1):\n",
" Role = sanitize(row[IndexDict.get(\"i_Parent{}Role\".format(item))])\n",
" if isinstance (Role,str) and Role in ValidRoles: \n",
" WordRole=Role \n",
" WordRoleLong=ExpandRole(WordRole)\n",
" DistanceToRoleClause=item\n",
" break\n",
" \n",
" # Find the number of the WG containing the clause definition\n",
" for item in range(1,NumberOfParents-1):\n",
" WGrule = sanitize(row[IndexDict.get(\"i_Parent{}Rule\".format(item))])\n",
" if row[IndexDict.get(\"i_Parent{}Class\".format(item))]=='cl' or WGrule[-2:]=='CL': \n",
" ContainedClause=sanitize(row[IndexDict.get(\"i_Parent{}WGN\".format(item))])\n",
" break\n",
"\n",
" ###############################################\n",
" # analyze and process tags #\n",
" ###############################################\n",
" \n",
" # Determine syntactic categories at word level. \n",
" PartOfSpeech=sanitize(row[IndexDict.get(\"i_class\")])\n",
" PartOfSpeechFull=ExpandSP(PartOfSpeech)\n",
" \n",
" # The folling part of code reproduces feature 'word' and 'after' that are\n",
" # currently containing incorrect data in a few specific cases.\n",
" # See https://github.com/tonyjurg/Nestle1904LFT/blob/main/resources/identifying_odd_afters.ipynb\n",
" # Get the word details and detect presence of punctuations\n",
" word=sanitize(row[IndexDict.get(\"i_unicode\")])\n",
" match = re.search(r\"([\\.·—,;])$\", word)\n",
" if match: \n",
" # The group(0) method is used to retrieve the matched punctuation sign\n",
" after=match.group(0)+' '\n",
" # Remove the punctuation from the end of the word\n",
" word=word[:-1]\n",
" else: \n",
" after=' '\n",
" \n",
" # Some attributes are not present inside some (small) books. The following is to prevent exceptions.\n",
" degree='' \n",
" if 'i_degree' in IndexDict: degree=sanitize(row[IndexDict.get(\"i_degree\")]) \n",
" subjref=''\n",
" if 'i_subjref' in IndexDict: subjref=sanitize(row[IndexDict.get(\"i_subjref\")]) \n",
"\n",
" \n",
" # Create the word slots\n",
" this_word = cv.slot()\n",
" cv.feature(this_word, \n",
" after= after,\n",
" unicode= sanitize(row[IndexDict.get(\"i_unicode\")]),\n",
" word= word,\n",
" monad= FoundWords,\n",
" orig_order= sanitize(row[IndexDict.get(\"i_word_order\")]),\n",
" book= Book,\n",
" booknumber= BookNumber,\n",
" bookshort= BookShort,\n",
" chapter= ThisChapter,\n",
" ref= sanitize(row[IndexDict.get(\"i_ref\")]),\n",
" sp= PartOfSpeech,\n",
" sp_full= PartOfSpeechFull,\n",
" verse= ThisVerse,\n",
" sentence= ThisSentence,\n",
" normalized= sanitize(row[IndexDict.get(\"i_normalized\")]),\n",
" morph= sanitize(row[IndexDict.get(\"i_morph\")]),\n",
" strongs= sanitize(row[IndexDict.get(\"i_strong\")]),\n",
" lex_dom= sanitize(row[IndexDict.get(\"i_domain\")]),\n",
" ln= sanitize(row[IndexDict.get(\"i_ln\")]),\n",
" gloss= sanitize(row[IndexDict.get(\"i_gloss\")]),\n",
" gn= sanitize(row[IndexDict.get(\"i_gender\")]),\n",
" nu= sanitize(row[IndexDict.get(\"i_number\")]),\n",
" case= sanitize(row[IndexDict.get(\"i_case\")]),\n",
" lemma= sanitize(row[IndexDict.get(\"i_lemma\")]),\n",
" person= sanitize(row[IndexDict.get(\"i_person\")]),\n",
" mood= sanitize(row[IndexDict.get(\"i_mood\")]),\n",
" tense= sanitize(row[IndexDict.get(\"i_tense\")]),\n",
" number= sanitize(row[IndexDict.get(\"i_number\")]),\n",
" voice= sanitize(row[IndexDict.get(\"i_voice\")]),\n",
" degree= degree,\n",
" type= sanitize(row[IndexDict.get(\"i_type\")]),\n",
" reference= sanitize(row[IndexDict.get(\"i_ref\")]), \n",
" subj_ref= subjref,\n",
" nodeID= sanitize(row[4]), #this is a fixed position in dataframe\n",
" wordrole= WordRole,\n",
" wordrolelong= WordRoleLong,\n",
" wordlevel= NumberOfParents-1,\n",
" roleclausedistance = DistanceToRoleClause,\n",
" containedclause = ContainedClause\n",
" )\n",
" cv.terminate(this_word)\n",
"\n",
" \n",
" '''\n",
" wrap up the book. At the end of the book we need to close all nodes in proper order.\n",
" ''' \n",
" # close all open WordGroup nodes\n",
" for item in WordGroupList:\n",
" #cv.feature(WordGroupDict[(item,4)], add some stats?)\n",
" cv.terminate(WordGroupDict[item,4])\n",
"\n",
" cv.terminate(ThisSentencePointer)\n",
" cv.terminate(ThisVersePointer)\n",
" cv.terminate(ThisChapterPointer) \n",
" cv.terminate(ThisBookPointer)\n",
"\n",
" # clear dataframe for this book, clear the index dictionary\n",
" del df\n",
" IndexDict.clear()\n",
" gc.collect()\n",
" \n",
" ###############################################\n",
" # end of section executed for each book #\n",
" ###############################################\n",
"\n",
" ###############################################\n",
" # end of director function #\n",
" ###############################################\n",
" \n",
"###############################################\n",
"# Output definitions #\n",
"###############################################\n",
" \n",
"slotType = 'word' \n",
"otext = { # dictionary of config data for sections and text formats\n",
" 'fmt:text-orig-full':'{word}{after}',\n",
" 'sectionTypes':'book,chapter,verse',\n",
" 'sectionFeatures':'book,chapter,verse',\n",
" 'structureFeatures': 'book,chapter,verse',\n",
" 'structureTypes': 'book,chapter,verse',\n",
" }\n",
"\n",
"# configure metadata\n",
"generic = { # dictionary of metadata meant for all features\n",
" 'textFabriVersion': '{}'.format(VERSION), #imported from tf.parameter\n",
" 'xmlSourceLocation': 'https://github.com/tonyjurg/Nestle1904LFT/tree/main/resources/xml/20230628',\n",
" 'xmlSourceDate': 'June 28, 2023',\n",
" 'author': 'Evangelists and apostles',\n",
" 'availability': 'Creative Commons Attribution 4.0 International (CC BY 4.0)',\n",
" 'converters': 'Tony Jurg',\n",
" 'converterSource': 'https://github.com/tonyjurg/Nestle1904LFT/tree/main/resources/converter',\n",
" 'converterVersion': '0.4',\n",
" 'dataSource': 'MACULA Greek Linguistic Datasets, available at https://github.com/Clear-Bible/macula-greek/tree/main/Nestle1904/nodes',\n",
" 'editors': 'Eberhart Nestle (1904)',\n",
" 'sourceDescription': 'Greek New Testment (British Foreign Bible Society, 1904)',\n",
" 'sourceFormat': 'XML (Low Fat tree XML data)',\n",
" 'title': 'Greek New Testament (Nestle1904LFT)'\n",
" }\n",
"\n",
"# set of integer valued feature names\n",
"intFeatures = { \n",
" 'booknumber',\n",
" 'chapter',\n",
" 'verse',\n",
" 'sentence',\n",
" 'wgnum',\n",
" 'orig_order',\n",
" 'monad',\n",
" 'wglevel'\n",
" }\n",
"\n",
"# per feature dicts with metadata\n",
"featureMeta = { \n",
" 'after': {'description': 'Characters (eg. punctuations) following the word'},\n",
" 'book': {'description': 'Book name'},\n",
" 'booknumber': {'description': 'NT book number (Matthew=1, Mark=2, ..., Revelation=27)'},\n",
" 'bookshort': {'description': 'Book name (abbreviated)'},\n",
" 'chapter': {'description': 'Chapter number inside book'},\n",
" 'verse': {'description': 'Verse number inside chapter'},\n",
" 'sentence': {'description': 'Sentence number (counted per chapter)'},\n",
" 'type': {'description': 'Wordgroup type information (verb, verbless, elided, minor, etc.)'},\n",
" 'wgrule': {'description': 'Wordgroup rule information'},\n",
" 'orig_order': {'description': 'Word order (in source XML file)'},\n",
" 'monad': {'description': 'Monad (word order in the corpus)'},\n",
" 'word': {'description': 'Word as it appears in the text (excl. punctuations)'},\n",
" 'unicode': {'description': 'Word as it arears in the text in Unicode (incl. punctuations)'},\n",
" 'ref': {'description': 'ref ID'},\n",
" 'sp': {'description': 'Part of Speech (abbreviated)'},\n",
" 'sp_full': {'description': 'Part of Speech (long description)'}, \n",
" 'normalized': {'description': 'Surface word with accents normalized and trailing punctuations removed'},\n",
" 'lemma': {'description': 'Lexeme (lemma)'},\n",
" 'morph': {'description': 'Morphological tag (Sandborg-Petersen morphology)'},\n",
" # see also discussion on relation between lex_dom and ln \n",
" # @ https://github.com/Clear-Bible/macula-greek/issues/29\n",
" 'lex_dom': {'description': 'Lexical domain according to Semantic Dictionary of Biblical Greek, SDBG (not present everywhere?)'},\n",
" 'ln': {'description': 'Lauw-Nida lexical classification (not present everywhere?)'},\n",
" 'strongs': {'description': 'Strongs number'},\n",
" 'gloss': {'description': 'English gloss'},\n",
" 'gn': {'description': 'Gramatical gender (Masculine, Feminine, Neuter)'},\n",
" 'nu': {'description': 'Gramatical number (Singular, Plural)'},\n",
" 'case': {'description': 'Gramatical case (Nominative, Genitive, Dative, Accusative, Vocative)'},\n",
" 'person': {'description': 'Gramatical person of the verb (first, second, third)'},\n",
" 'mood': {'description': 'Gramatical mood of the verb (passive, etc)'},\n",
" 'tense': {'description': 'Gramatical tense of the verb (e.g. Present, Aorist)'},\n",
" 'number': {'description': 'Gramatical number of the verb'},\n",
" 'voice': {'description': 'Gramatical voice of the verb'},\n",
" 'degree': {'description': 'Degree (e.g. Comparitative, Superlative)'},\n",
" 'type': {'description': 'Gramatical type of noun or pronoun (e.g. Common, Personal)'},\n",
" 'reference': {'description': 'Reference (to nodeID in XML source data, not yet post-processes)'},\n",
" 'subj_ref': {'description': 'Subject reference (to nodeID in XML source data, not yet post-processes)'},\n",
" 'nodeID': {'description': 'Node ID (as in the XML source data, not yet post-processes)'},\n",
" 'junction': {'description': 'Junction data related to a wordgroup'},\n",
" 'wgnum': {'description': 'Wordgroup number (counted per book)'},\n",
" 'wgclass': {'description': 'Class of the wordgroup ()'},\n",
" 'wgrole': {'description': 'Role of the wordgroup (abbreviated)'},\n",
" 'wgrolelong': {'description': 'Role of the wordgroup (full)'},\n",
" 'wordrole': {'description': 'Role of the word (abbreviated)'},\n",
" 'wordrolelong':{'description': 'Role of the word (full)'},\n",
" 'wgtype': {'description': 'Wordgroup type details'},\n",
" 'clausetype': {'description': 'Clause type details'},\n",
" 'appos': {'description': 'Apposition details'},\n",
" 'wglevel': {'description': 'Number of parent wordgroups for a wordgroup'},\n",
" 'wordlevel': {'description': 'Number of parent wordgroups for a word'},\n",
" 'roleclausedistance': {'description': 'Distance to wordgroup defining the role of this word'},\n",
" 'containedclause': {'description': 'Contained clause (WG number)'}\n",
" }\n",
"\n",
"\n",
"###############################################\n",
"# the main function #\n",
"###############################################\n",
"\n",
"good = cv.walk(\n",
" director,\n",
" slotType,\n",
" otext=otext,\n",
" generic=generic,\n",
" intFeatures=intFeatures,\n",
" featureMeta=featureMeta,\n",
" warn=True,\n",
" force=False\n",
")\n",
"\n",
"if good:\n",
" print (\"done\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"ExecuteTime": {
"end_time": "2022-10-21T02:32:54.197994Z",
"start_time": "2022-10-21T02:32:53.217806Z"
}
},
"outputs": [],
"source": [
"# First, I have to laod different modules that I use for analyzing the data and for plotting:\n",
"import sys, os, collections\n",
"import pandas as pd\n",
"import numpy as np\n",
"import re\n",
"\n",
"\n",
"from tf.fabric import Fabric\n",
"from tf.app import use\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following cell loads the TextFabric files from github repository. "
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"ExecuteTime": {
"end_time": "2022-10-21T02:32:55.906200Z",
"start_time": "2022-10-21T02:32:55.012231Z"
}
},
"outputs": [
{
"data": {
"text/markdown": [
"**Locating corpus resources ...**"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"The requested app is not available offline\n",
"\t~/text-fabric-data/github/tonyjurg/Nestle1904LFT/app not found\n"
]
},
{
"data": {
"text/html": [
"Status: latest release online v0.2 versus None locally"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"downloading app, main data and requested additions ..."
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"app: ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/app"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"findAppClass: invalid syntax (~/text-fabric-data/github/tonyjurg/Nestle1904LFT/app/app.py, line 5)\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"findAppClass: Api for \"tonyjurg/Nestle1904LFT\" not loaded\n",
"The requested data is not available offline\n",
"\t~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4 not found\n"
]
},
{
"data": {
"text/html": [
"Status: latest release online v0.2 versus None locally"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"downloading app, main data and requested additions ..."
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/html": [
"data: ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
" | 0.25s T otype from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 2.72s T oslots from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.56s T chapter from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.56s T after from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.69s T word from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.55s T verse from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.58s T book from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | | 0.06s C __levels__ from otype, oslots, otext\n",
" | | 1.62s C __order__ from otype, oslots, __levels__\n",
" | | 0.07s C __rank__ from otype, __order__\n",
" | | 4.30s C __levUp__ from otype, oslots, __rank__\n",
" | | 2.21s C __levDown__ from otype, __levUp__, __rank__\n",
" | | 0.06s C __characters__ from otext\n",
" | | 1.24s C __boundary__ from otype, oslots, __rank__\n",
" | | 0.05s C __sections__ from otype, oslots, otext, __levUp__, __levels__, book, chapter, verse\n",
" | | 0.26s C __structure__ from otype, oslots, otext, __rank__, __levUp__, book, chapter, verse\n",
" | 0.44s T appos from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.49s T booknumber from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.58s T bookshort from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.55s T case from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.40s T clausetype from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.62s T containedclause from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.49s T degree from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.66s T gloss from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.54s T gn from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.42s T junction from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.62s T lemma from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.59s T lex_dom from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.61s T ln from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.50s T monad from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.51s T mood from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.59s T morph from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.61s T nodeID from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.66s T normalized from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.56s T nu from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.55s T number from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.50s T orig_order from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.50s T person from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.76s T ref from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.58s T roleclausedistance from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.49s T rule from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.51s T sentence from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.60s T sp from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.58s T sp_full from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.61s T strongs from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.53s T subj_ref from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.53s T tense from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.53s T type from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.68s T unicode from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.51s T voice from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.47s T wgclass from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.41s T wglevel from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.43s T wgnum from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.43s T wgrole from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.44s T wgrolelong from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.41s T wgtype from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.58s T wordlevel from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.58s T wordrole from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n",
" | 0.60s T wordrolelong from ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/tf/0.4\n"
]
},
    {
     "data": {
      "text/plain": [
       "Text-Fabric: Text-Fabric API 11.4.10, tonyjurg/Nestle1904LFT/app v3, Search Reference\n",
       "Data: tonyjurg - Nestle1904LFT 0.4, Character table, Feature docs\n",
       "\n",
       "Node types:\n",
       "  Name     | # of nodes | # slots/node | % coverage\n",
       "  book     |         27 |      5102.93 |        100\n",
       "  chapter  |        260 |       529.92 |        100\n",
       "  verse    |       7943 |        17.35 |        100\n",
       "  sentence |       8011 |        17.20 |        100\n",
       "  wg       |     113447 |         7.58 |        624\n",
       "  word     |     137779 |         1.00 |        100\n",
       "\n",
       "Sets: no custom sets\n",
       "\n",
       "Features (Nestle 1904): name, type, description\n",
       "  after               str   Characters (e.g. punctuations) following the word\n",
       "  appos               str   Apposition details\n",
       "  book                str   Book name\n",
       "  booknumber          int   NT book number (Matthew=1, Mark=2, ..., Revelation=27)\n",
       "  bookshort           str   Book name (abbreviated)\n",
       "  case                str   Grammatical case (Nominative, Genitive, Dative, Accusative, Vocative)\n",
       "  chapter             int   Chapter number inside book\n",
       "  clausetype          str   Clause type details\n",
       "  containedclause     str   Contained clause (WG number)\n",
       "  degree              str   Degree (e.g. Comparative, Superlative)\n",
       "  gloss               str   English gloss\n",
       "  gn                  str   Grammatical gender (Masculine, Feminine, Neuter)\n",
       "  junction            str   Junction data related to a wordgroup\n",
       "  lemma               str   Lexeme (lemma)\n",
       "  lex_dom             str   Lexical domain according to Semantic Dictionary of Biblical Greek, SDBG (not present everywhere?)\n",
       "  ln                  str   Louw-Nida lexical classification (not present everywhere?)\n",
       "  monad               int   Monad (currently: order of words in XML tree file!)\n",
       "  mood                str   Grammatical mood of the verb (passive, etc.)\n",
       "  morph               str   Morphological tag (Sandborg-Petersen morphology)\n",
       "  nodeID              str   Node ID (as in the XML source data, not yet post-processed)\n",
       "  normalized          str   Surface word stripped of punctuations\n",
       "  nu                  str   Grammatical number (Singular, Plural)\n",
       "  number              str   Grammatical number of the verb\n",
       "  orig_order          int   Word order within corpus (per book)\n",
       "  otype               str   \n",
       "  person              str   Grammatical person of the verb (first, second, third)\n",
       "  ref                 str   Reference ID\n",
       "  roleclausedistance  str   Distance to the wordgroup defining the role of this word\n",
       "  rule                str   Wordgroup rule information\n",
       "  sentence            int   Sentence number (counted per chapter)\n",
       "  sp                  str   Part of Speech (abbreviated)\n",
       "  sp_full             str   Part of Speech (long description)\n",
       "  strongs             str   Strongs number\n",
       "  subj_ref            str   Subject reference (to nodeID in XML source data, not yet post-processed)\n",
       "  tense               str   Grammatical tense of the verb (e.g. Present, Aorist)\n",
       "  type                str   Grammatical type of noun or pronoun (e.g. Common, Personal)\n",
       "  unicode             str   Word as it appears in the text in Unicode (incl. punctuations)\n",
       "  verse               int   Verse number inside chapter\n",
       "  voice               str   Grammatical voice of the verb\n",
       "  wgclass             str   Class of the wordgroup\n",
       "  wglevel             int   Number of parent wordgroups for a wordgroup\n",
       "  wgnum               int   Wordgroup number (counted per book)\n",
       "  wgrole              str   Role of the wordgroup (abbreviated)\n",
       "  wgrolelong          str   Role of the wordgroup (full)\n",
       "  wgtype              str   Wordgroup type details\n",
       "  word                str   Word as it appears in the text (excl. punctuations)\n",
       "  wordlevel           str   Number of parent wordgroups for a word\n",
       "  wordrole            str   Role of the word (abbreviated)\n",
       "  wordrolelong        str   Role of the word (full)\n",
       "  oslots              none  \n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "# Load the Nestle1904LFT Text-Fabric dataset (fetched from GitHub when not cached locally)\n",
    "from tf.app import use\n",
    "N1904 = use(\"tonyjurg/Nestle1904LFT\", version=\"0.4\", hoist=globals())"
]
},
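  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check of the loaded dataset, the cell below uses the hoisted Text-Fabric handles (`F` for node features, `T` for the text API) to print the first few word nodes together with two of the features documented above (`sp` and `gloss`). This is a minimal illustrative sketch, not part of the conversion pipeline; it assumes the `use()` call above completed successfully.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sanity check: F and T were hoisted into globals() by use() above.\n",
    "for node in F.otype.s('word')[:5]:                   # first five word (slot) nodes\n",
    "    book, chapter, verse = T.sectionFromNode(node)   # locate the node in the corpus\n",
    "    print(book, chapter, verse, F.word.v(node), F.sp.v(node), F.gloss.v(node))"
   ]
  },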
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.12"
},
"toc": {
"base_numbering": 1,
"nav_menu": {},
"number_sections": true,
"sideBar": true,
"skip_h1_title": false,
"title_cell": "Table of Contents",
"title_sidebar": "Contents",
"toc_cell": true,
"toc_position": {
"height": "calc(100% - 180px)",
"left": "10px",
"top": "150px",
"width": "321.391px"
},
"toc_section_display": true,
"toc_window_display": true
}
},
"nbformat": 4,
"nbformat_minor": 4
}