{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Creating Text-Fabric dataset (from LowFat XML trees)\n", "\n", "
\n",
    "    Code version: 0.7 (February 20, 2024)\n",
    "    Data version: February 10, 2024 (Readme)\n",
    "
\n", "\n", "## Table of content \n", "* 1 - Introduction\n", "* 2 - Read LowFat XML data and store in pickle\n", " * 2.1 - Required libraries\n", " * 2.2 - Import various libraries\n", " * 2.3 - Initialize global data\n", " * 2.4 - Process the XML data and store dataframe in pickle\n", "* 3 - Optionaly export to aid investigation\n", " * 3.1 - Export to Excel format \n", " * 3.2 - Export to CSV format\n", "* 4 - Text-Fabric dataset production from pickle input\n", " * 4.1 - Explanation\n", " * 4.2 - Running the TF walker function\n", "* 5 - Housekeeping\n", " * 5.1 - Optionaly zip-up the pickle files\n", " * 5.2 - Publishing on gitHub" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "# 1 - Introduction \n", "##### [Back to TOC](#TOC)\n", "\n", "The source data for the conversion are the LowFat XML trees files representing the macula-greek version of the Nestle 1904 Greek New Testment (British Foreign Bible Society, 1904). The starting dataset is formatted according to Syntax diagram markup by the Global Bible Initiative (GBI). The most recent source data can be found on github https://github.com/Clear-Bible/macula-greek/tree/main/Nestle1904/lowfat. \n", "\n", "Attribution: \"MACULA Greek Linguistic Datasets, available at https://github.com/Clear-Bible/macula-greek/\". \n", "\n", "The production of the Text-Fabric files consist of two phases. First one is the creation of piclke files (section 2). The second phase is the the actual Text-Fabric creation process (section 3). The process can be depicted as follows:\n", "\n", "" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "# 2 - Read LowFat XML data and store in pickle \n", "##### [Back to TOC](#TOC)\n", "\n", "This script harvests all information from the LowFat tree data (XML nodes), puts it into a Panda DataFrame and stores the result per book in a pickle file. Note: pickling (in Python) is serialising an object into a disk file (or buffer). See also the [Python3 documentation](https://docs.python.org/3/library/pickle.html).\n", "\n", "Within the context of this script, the term 'Leaf' refers to nodes that contain the Greek word as data. These nodes are also referred to as 'terminal nodes' since they do not have any children, similar to leaves on a tree. Additionally, Parent1 represents the parent of the leaf, Parent2 represents the parent of Parent1, and so on. For a visual representation, please refer to the following diagram.\n", "\n", "\n", "\n", "For a full description of the source data see document [MACULA Greek Treebank for the Nestle 1904 Greek New Testament.pdf](https://github.com/Clear-Bible/macula-greek/blob/main/doc/MACULA%20Greek%20Treebank%20for%20the%20Nestle%201904%20Greek%20New%20Testament.pdf)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2.1 Required libraries\n", "##### [Back to TOC](#TOC)\n", "\n", "The scripts in this notebook require (beside text-fabric) a number of Python libraries to be installed in the environment (see following section).\n", "You can install any missing library from within Jupyter Notebook using either `pip` or `pip3`. (eg.: !pip3 install pandas)" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "## 2.2 - Import various libraries\n", "##### [Back to TOC](#TOC)\n", "\n", "The following cell reads all required libraries by the scripts in this notebook." 
] }, { "cell_type": "code", "execution_count": 1, "metadata": { "ExecuteTime": { "end_time": "2022-10-28T02:58:14.739227Z", "start_time": "2022-10-28T02:57:38.766097Z" } }, "outputs": [], "source": [ "import pandas as pd\n", "import sys # System\n", "import os # Operating System\n", "from os import listdir\n", "from os.path import isfile, join\n", "import time\n", "import pickle\n", "import re # Regular Expressions\n", "from lxml import etree as ET\n", "from tf.fabric import Fabric\n", "from tf.convert.walker import CV\n", "from tf.parameters import VERSION\n", "from datetime import date\n", "import pickle\n", "import unicodedata\n", "from unidecode import unidecode\n", "import openpyxl" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2.3 - Initialize global data\n", "##### [Back to TOC](#TOC)\n", "\n", "The following cell initializes the global data used by the various scripts in this notebook. Many of these global variables are shared among the scripts as they relate to common entities.\n", "\n", "IMPORTANT: To ensure proper creation of the Text-Fabric files on your system, it is crucial to adjust the values of BaseDir, XmlDir, etc. to match the location of the data and the operating system you are using. In this Jupyter Notebook, Windows is the operating system employed." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "# set script version number\n", "scriptVersion='0.7'\n", "scriptDate='February 20, 2024'\n", "\n", "# Define the source and destination locations \n", "BaseDir = '..\\\\'\n", "XmlDir = BaseDir+'xml\\\\20240210\\\\'\n", "PklDir = BaseDir+'pickle\\\\20240210\\\\'\n", "XlsxDir = BaseDir+'excel\\\\20240210\\\\'\n", "CsvDir = BaseDir+'csv\\\\20240210\\\\'\n", "# note: create output directory prior running the scripts!\n", "\n", "# key: filename, [0]=bookLong, [1]=bookNum, [3]=bookShort\n", "bo2book = {'01-matthew': ['Matthew', '1', 'Matt'],\n", " '02-mark': ['Mark', '2', 'Mark'],\n", " '03-luke': ['Luke', '3', 'Luke'],\n", " '04-john': ['John', '4', 'John'],\n", " '05-acts': ['Acts', '5', 'Acts'],\n", " '06-romans': ['Romans', '6', 'Rom'],\n", " '07-1corinthians': ['I_Corinthians', '7', '1Cor'],\n", " '08-2corinthians': ['II_Corinthians', '8', '2Cor'],\n", " '09-galatians': ['Galatians', '9', 'Gal'],\n", " '10-ephesians': ['Ephesians', '10', 'Eph'],\n", " '11-philippians': ['Philippians', '11', 'Phil'],\n", " '12-colossians': ['Colossians', '12', 'Col'],\n", " '13-1thessalonians':['I_Thessalonians', '13', '1Thess'],\n", " '14-2thessalonians':['II_Thessalonians','14', '2Thess'],\n", " '15-1timothy': ['I_Timothy', '15', '1Tim'],\n", " '16-2timothy': ['II_Timothy', '16', '2Tim'],\n", " '17-titus': ['Titus', '17', 'Titus'],\n", " '18-philemon': ['Philemon', '18', 'Phlm'],\n", " '19-hebrews': ['Hebrews', '19', 'Heb'],\n", " '20-james': ['James', '20', 'Jas'],\n", " '21-1peter': ['I_Peter', '21', '1Pet'],\n", " '22-2peter': ['II_Peter', '22', '2Pet'],\n", " '23-1john': ['I_John', '23', '1John'],\n", " '24-2john': ['II_John', '24', '2John'],\n", " '25-3john': ['III_John', '25', '3John'], \n", " '26-jude': ['Jude', '26', 'Jude'],\n", " '27-revelation': ['Revelation', '27', 'Rev']}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 2.4 - Process the XML data and store dataframe in pickle\n", "##### [Back to TOC](#TOC)\n", "\n", "This code processes all 27 books in the correct order.\n", "For each book, the following is done:\n", "\n", "* create a parent-child map based upon the XML source (function 
buildParentMap).\n", "* loop through the XML source to identify 'leaf' nodes, gather information regarding all of their parents (function processElement), and store the results in a data list.\n", "* After processing all the nodes, the data list is converted to a dataframe and exported as a pickle file specific to that book.\n", "\n", "Once the XML data is converted to PKL files, there is no need to rerun this step (unless the source XML data is updated).\n", "\n", "Since the size of the pickle files can be rather large, it is advised to add the .pkl extension to the GitHub ignore list (.gitignore)." ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Extract data from XML files and store it in pickle files\n", "\tProcessing Matthew at ..\\xml\\20240210\\01-matthew.xml Found 18299 items in 1.42 seconds.\n", "\tProcessing Mark at ..\\xml\\20240210\\02-mark.xml Found 11277 items in 0.95 seconds.\n", "\tProcessing Luke at ..\\xml\\20240210\\03-luke.xml Found 19456 items in 4.10 seconds.\n", "\tProcessing John at ..\\xml\\20240210\\04-john.xml Found 15643 items in 1.24 seconds.\n", "\tProcessing Acts at ..\\xml\\20240210\\05-acts.xml Found 18393 items in 1.59 seconds.\n", "\tProcessing Romans at ..\\xml\\20240210\\06-romans.xml Found 7100 items in 0.66 seconds.\n", "\tProcessing I_Corinthians at ..\\xml\\20240210\\07-1corinthians.xml Found 6820 items in 0.58 seconds.\n", "\tProcessing II_Corinthians at ..\\xml\\20240210\\08-2corinthians.xml Found 4469 items in 0.43 seconds.\n", "\tProcessing Galatians at ..\\xml\\20240210\\09-galatians.xml Found 2228 items in 0.29 seconds.\n", "\tProcessing Ephesians at ..\\xml\\20240210\\10-ephesians.xml Found 2419 items in 0.30 seconds.\n", "\tProcessing Philippians at ..\\xml\\20240210\\11-philippians.xml Found 1630 items in 0.21 seconds.\n", "\tProcessing Colossians at ..\\xml\\20240210\\12-colossians.xml Found 1575 items in 0.23 seconds.\n", "\tProcessing I_Thessalonians at ..\\xml\\20240210\\13-1thessalonians.xml Found 1473 items in 0.14 seconds.\n", "\tProcessing II_Thessalonians at ..\\xml\\20240210\\14-2thessalonians.xml Found 822 items in 0.11 seconds.\n", "\tProcessing I_Timothy at ..\\xml\\20240210\\15-1timothy.xml Found 1588 items in 0.18 seconds.\n", "\tProcessing II_Timothy at ..\\xml\\20240210\\16-2timothy.xml Found 1237 items in 0.28 seconds.\n", "\tProcessing Titus at ..\\xml\\20240210\\17-titus.xml Found 658 items in 0.14 seconds.\n", "\tProcessing Philemon at ..\\xml\\20240210\\18-philemon.xml Found 335 items in 0.20 seconds.\n", "\tProcessing Hebrews at ..\\xml\\20240210\\19-hebrews.xml Found 4955 items in 0.42 seconds.\n", "\tProcessing James at ..\\xml\\20240210\\20-james.xml Found 1739 items in 0.35 seconds.\n", "\tProcessing I_Peter at ..\\xml\\20240210\\21-1peter.xml Found 1676 items in 0.38 seconds.\n", "\tProcessing II_Peter at ..\\xml\\20240210\\22-2peter.xml Found 1098 items in 0.21 seconds.\n", "\tProcessing I_John at ..\\xml\\20240210\\23-1john.xml Found 2136 items in 0.33 seconds.\n", "\tProcessing II_John at ..\\xml\\20240210\\24-2john.xml Found 245 items in 0.14 seconds.\n", "\tProcessing III_John at ..\\xml\\20240210\\25-3john.xml Found 219 items in 0.13 seconds.\n", "\tProcessing Jude at ..\\xml\\20240210\\26-jude.xml Found 457 items in 0.13 seconds.\n", "\tProcessing Revelation at ..\\xml\\20240210\\27-revelation.xml Found 9832 items in 0.87 seconds.\n", "Finished in 18.58 seconds.\n" ] } ], "source": [ "# Create the pickle files\n", "\n", "# 
Set global variables for this script\n", "WordOrder = 1\n", "CollectedItems = 0\n", "\n", "###############################################\n", "# The helper functions #\n", "###############################################\n", "\n", "def buildParentMap(tree):\n", " \"\"\"\n", " Builds a mapping of child elements to their parent elements in an XML tree.\n", " This function is useful for cases where you need to navigate from a child element\n", " up to its parent element, as the ElementTree API does not provide this functionality directly.\n", "\n", " Parameters:\n", " tree (ElementTree): An XML ElementTree object.\n", "\n", " Returns:\n", " dict: A dictionary where keys are child elements and values are their respective parent elements.\n", " \n", " Usage:\n", " To build the map:\n", " tree = ET.parse(InputFile)\n", " parentMap = buildParentMap(tree)\n", " Then, whenever you need a parent of an element:\n", " parent = getParent(someElement, parentMap)\n", " \n", " \"\"\"\n", " return {c: p for p in tree.iter() for c in p}\n", "\n", "def getParent(et, parentMap):\n", " \"\"\"\n", " Retrieves the parent element of a given element from the parent map.\n", "\n", " Parameters:\n", " et (Element): The XML element whose parent is to be found.\n", " parentMap (dict): A dictionary mapping child elements to their parents.\n", "\n", " Returns:\n", " Element: The parent element of the given element. Returns None if the parent is not found.\n", " \"\"\"\n", " return parentMap.get(et)\n", "\n", "def processElement(elem, bookInfo, WordOrder, parentMap):\n", " \"\"\"\n", " Processes an XML element to extract and augment its attributes with additional data.\n", " This function adds new attributes to an element and modifies existing ones based on the provided\n", " book information, word order, and parent map. 
It also collects hierarchical information\n", " about the element's ancestors in the XML structure.\n", "\n", " Parameters:\n", " elem (Element): The XML element to be processed.\n", " bookInfo (tuple): A tuple containing information about the book (long name, book number, short name).\n", " WordOrder (int): The order of the word in the current processing context.\n", " parentMap (dict): A dictionary mapping child elements to their parents.\n", "\n", " Returns:\n", " tuple: A tuple containing the updated attributes of the element and the next word order.\n", " \"\"\"\n", " global CollectedItems\n", " LeafRef = re.sub(r'[!: ]', \" \", elem.attrib.get('ref')).split()\n", " elemAttrib = dict(elem.attrib) # Create a copy of the attributes using dict()\n", "\n", " # Adding new or modifying existing attributes\n", " elemAttrib.update({\n", " 'wordOrder': WordOrder,\n", " 'LeafName': elem.tag,\n", " 'word': elem.text,\n", " 'bookLong': bookInfo[0],\n", " 'bookNum': int(bookInfo[1]),\n", " 'bookShort': bookInfo[2],\n", " 'chapter': int(LeafRef[1]),\n", " 'verse': int(LeafRef[2]),\n", " 'parents': 0 # Initialize 'parents' attribute\n", " })\n", "\n", " parentnode = getParent(elem, parentMap)\n", " index = 0\n", " while parentnode is not None:\n", " index += 1\n", " parent_attribs = {\n", " f'Parent{index}Name': parentnode.tag,\n", " f'Parent{index}Type': parentnode.attrib.get('type'),\n", " f'Parent{index}Appos': parentnode.attrib.get('appositioncontainer'),\n", " f'Parent{index}Class': parentnode.attrib.get('class'),\n", " f'Parent{index}Rule': parentnode.attrib.get('rule'),\n", " f'Parent{index}Role': parentnode.attrib.get('role'),\n", " f'Parent{index}Cltype': parentnode.attrib.get('cltype'),\n", " f'Parent{index}Unit': parentnode.attrib.get('unit'),\n", " f'Parent{index}Junction': parentnode.attrib.get('junction'),\n", " f'Parent{index}SN': parentnode.attrib.get('SN'),\n", " f'Parent{index}WGN': parentnode.attrib.get('WGN')\n", " }\n", " elemAttrib.update(parent_attribs)\n", " parentnode = getParent(parentnode, parentMap)\n", "\n", " elemAttrib['parents'] = index\n", "\n", " CollectedItems += 1\n", " return elemAttrib, WordOrder + 1\n", "\n", "def fixAttributeId(tree):\n", " \"\"\"\n", " Renames attributes in an XML tree that match the pattern '{*}id' to 'id'.\n", "\n", " Parameters:\n", " tree (lxml.etree._ElementTree): The XML tree to be processed.\n", "\n", " Returns:\n", " None: The function modifies the tree in-place and does not return anything.\n", " \"\"\"\n", " # Regex pattern to match attributes like '{...}id'\n", " pattern = re.compile(r'\\{.*\\}id')\n", " for element in tree.iter():\n", " attributes_to_rename = [attr for attr in element.attrib if pattern.match(attr)]\n", " for attr in attributes_to_rename:\n", " element.attrib['id'] = element.attrib.pop(attr)\n", "\n", "###############################################\n", "# The main routine #\n", "###############################################\n", "\n", "# Process books\n", "print ('Extract data from XML files and store it in pickle files')\n", "overalTime = time.time()\n", "for bo, bookInfo in bo2book.items():\n", " CollectedItems = 0\n", " SentenceNumber = 0\n", " WordGroupNumber = 0\n", " dataList = [] # List to store data dictionaries\n", "\n", " InputFile = os.path.join(XmlDir, f'{bo}.xml')\n", " OutputFile = os.path.join(PklDir, f'{bo}.pkl')\n", " print(f'\\tProcessing {bookInfo[0]} at {InputFile} ', end='')\n", "\n", " try:\n", " tree = ET.parse(InputFile)\n", " fixAttributeId(tree)\n", " parentMap = buildParentMap(tree)\n", 
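"        # At this point the XML for this book is parsed, namespaced id attributes are renamed to plain 'id',\n", "        # and parentMap allows child-to-parent lookups while walking the tree below.\n",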
" except Exception as e:\n", " print(f\"Error parsing XML file {InputFile}: {e}\")\n", " continue\n", "\n", " start_time = time.time()\n", "\n", " for elem in tree.iter():\n", " if elem.tag == 'sentence':\n", " SentenceNumber += 1\n", " elem.set('SN', str(SentenceNumber))\n", " elif elem.tag == 'error': # workaround for one node found in the source XML, which is in fact a node failing analysis \n", " elem.tag = 'wg'\n", " if elem.tag == 'wg':\n", " WordGroupNumber += 1\n", " elem.set('WGN', str(WordGroupNumber))\n", " if elem.tag == 'w':\n", " elemAttrib, WordOrder = processElement(elem, bookInfo, WordOrder, parentMap)\n", " dataList.append(elemAttrib)\n", "\n", " fullDataFrame = pd.DataFrame(dataList) # Create DataFrame once after processing all elements\n", " \n", " # Open the file using a context manager\n", " with open(OutputFile, 'wb') as output:\n", " pickle.dump(fullDataFrame, output)\n", "\n", " print(f\"Found {CollectedItems} items in {time.time() - start_time:.2f} seconds.\")\n", " \n", "print(f'Finished in {time.time() - overalTime:.2f} seconds.')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# 3 - Optionaly export to aid investigation\n", "##### [Back to TOC](#TOC)\n", "\n", "This step is optional. It will allow for manual examining the input data to the Text-Fabric conversion script.\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3.1 - Export to Excel format\n", "##### [Back to TOC](#TOC)\n", "\n", "Warning: Exporting of pandas dataframes to Excel format is **very slow**." ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Please be patient. This export takes significant time\n", "\tloading ..\\pickle\\20240210\\01-matthew.pkl... done in 44.67 seconds.\n", "\tloading ..\\pickle\\20240210\\02-mark.pkl... done in 29.77 seconds.\n", "\tloading ..\\pickle\\20240210\\03-luke.pkl... done in 397.39 seconds.\n", "\tloading ..\\pickle\\20240210\\04-john.pkl... done in 36.17 seconds.\n", "\tloading ..\\pickle\\20240210\\05-acts.pkl... done in 108.53 seconds.\n", "\tloading ..\\pickle\\20240210\\06-romans.pkl... done in 23.14 seconds.\n", "\tloading ..\\pickle\\20240210\\07-1corinthians.pkl... done in 18.10 seconds.\n", "\tloading ..\\pickle\\20240210\\08-2corinthians.pkl... done in 11.85 seconds.\n", "\tloading ..\\pickle\\20240210\\09-galatians.pkl... done in 5.26 seconds.\n", "\tloading ..\\pickle\\20240210\\10-ephesians.pkl... done in 8.84 seconds.\n", "\tloading ..\\pickle\\20240210\\11-philippians.pkl... done in 3.85 seconds.\n", "\tloading ..\\pickle\\20240210\\12-colossians.pkl... done in 4.56 seconds.\n", "\tloading ..\\pickle\\20240210\\13-1thessalonians.pkl... done in 5.08 seconds.\n", "\tloading ..\\pickle\\20240210\\14-2thessalonians.pkl... done in 1.85 seconds.\n", "\tloading ..\\pickle\\20240210\\15-1timothy.pkl... done in 4.43 seconds.\n", "\tloading ..\\pickle\\20240210\\16-2timothy.pkl... done in 4.54 seconds.\n", "\tloading ..\\pickle\\20240210\\17-titus.pkl... done in 3.28 seconds.\n", "\tloading ..\\pickle\\20240210\\18-philemon.pkl... done in 0.70 seconds.\n", "\tloading ..\\pickle\\20240210\\19-hebrews.pkl... done in 11.67 seconds.\n", "\tloading ..\\pickle\\20240210\\20-james.pkl... done in 4.99 seconds.\n", "\tloading ..\\pickle\\20240210\\21-1peter.pkl... done in 5.59 seconds.\n", "\tloading ..\\pickle\\20240210\\22-2peter.pkl... done in 2.38 seconds.\n", "\tloading ..\\pickle\\20240210\\23-1john.pkl... 
done in 4.89 seconds.\n", "\tloading ..\\pickle\\20240210\\24-2john.pkl... done in 0.47 seconds.\n", "\tloading ..\\pickle\\20240210\\25-3john.pkl... done in 0.34 seconds.\n", "\tloading ..\\pickle\\20240210\\26-jude.pkl... done in 0.98 seconds.\n", "\tloading ..\\pickle\\20240210\\27-revelation.pkl... done in 25.78 seconds.\n", "\n", "Finished in 769.16 seconds.\n" ] } ], "source": [ "# Pre-construct the base paths for input and output since they remain constant\n", "baseInputPath = os.path.join(PklDir, '{}.pkl')\n", "baseOutputPath = os.path.join(XlsxDir, '{}.xlsx')\n", "\n", "print('Exporting Pickle files to Excel format. Please be patient. This export takes significant time')\n", "overalTime = time.time()\n", "errorCondition=False\n", "\n", "for directory in (PklDir,XlsxDir):\n", " if not os.path.exists(directory):\n", " print(f\"Script aborted. The directory '{directory}' does not exist.\")\n", " errorCondition=True\n", "\n", "# Load books in order\n", "if not errorCondition:\n", " for bo in bo2book:\n", " startTime = time.time()\n", "\n", " # Use formatted strings for file names\n", " inputFile = baseInputPath.format(bo)\n", " outputFile = baseOutputPath.format(bo)\n", "\n", " print(f'\\tProcessing {inputFile} ...', end='')\n", "\n", " # Use context manager for reading pickle file\n", " with open(inputFile, 'rb') as pklFile:\n", " df = pickle.load(pklFile)\n", "\n", " # Export to Excel\n", " df.to_excel(outputFile, index=False)\n", "\n", " # Print the time taken for processing\n", " print(f' done in {time.time() - startTime:.2f} seconds.')\n", " \n", " print(f'Finished in {time.time() - overalTime:.2f} seconds.')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 3.2 - Export to CSV format\n", "##### [Back to TOC](#TOC)\n", "\n", "Exporting the pandas datframes to CSV format is fast. This file can easily be loaded into excel." ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Exporting pickle files to CSV formated files\n", "\tProcessing ..\\pickle\\20240210\\01-matthew.pkl ... done in 0.68 seconds.\n", "\tProcessing ..\\pickle\\20240210\\02-mark.pkl ... done in 0.45 seconds.\n", "\tProcessing ..\\pickle\\20240210\\03-luke.pkl ... done in 5.76 seconds.\n", "\tProcessing ..\\pickle\\20240210\\04-john.pkl ... done in 0.82 seconds.\n", "\tProcessing ..\\pickle\\20240210\\05-acts.pkl ... done in 0.92 seconds.\n", "\tProcessing ..\\pickle\\20240210\\06-romans.pkl ... done in 0.87 seconds.\n", "\tProcessing ..\\pickle\\20240210\\07-1corinthians.pkl ... done in 0.79 seconds.\n", "\tProcessing ..\\pickle\\20240210\\08-2corinthians.pkl ... done in 0.45 seconds.\n", "\tProcessing ..\\pickle\\20240210\\09-galatians.pkl ... done in 0.22 seconds.\n", "\tProcessing ..\\pickle\\20240210\\10-ephesians.pkl ... done in 0.34 seconds.\n", "\tProcessing ..\\pickle\\20240210\\11-philippians.pkl ... done in 0.20 seconds.\n", "\tProcessing ..\\pickle\\20240210\\12-colossians.pkl ... done in 0.08 seconds.\n", "\tProcessing ..\\pickle\\20240210\\13-1thessalonians.pkl ... done in 0.06 seconds.\n", "\tProcessing ..\\pickle\\20240210\\14-2thessalonians.pkl ... done in 0.04 seconds.\n", "\tProcessing ..\\pickle\\20240210\\15-1timothy.pkl ... done in 0.08 seconds.\n", "\tProcessing ..\\pickle\\20240210\\16-2timothy.pkl ... done in 0.08 seconds.\n", "\tProcessing ..\\pickle\\20240210\\17-titus.pkl ... done in 0.03 seconds.\n", "\tProcessing ..\\pickle\\20240210\\18-philemon.pkl ... 
done in 0.01 seconds.\n", "\tProcessing ..\\pickle\\20240210\\19-hebrews.pkl ... done in 0.19 seconds.\n", "\tProcessing ..\\pickle\\20240210\\20-james.pkl ... done in 0.06 seconds.\n", "\tProcessing ..\\pickle\\20240210\\21-1peter.pkl ... done in 0.10 seconds.\n", "\tProcessing ..\\pickle\\20240210\\22-2peter.pkl ... done in 0.04 seconds.\n", "\tProcessing ..\\pickle\\20240210\\23-1john.pkl ... done in 0.06 seconds.\n", "\tProcessing ..\\pickle\\20240210\\24-2john.pkl ... done in 0.01 seconds.\n", "\tProcessing ..\\pickle\\20240210\\25-3john.pkl ... done in 0.01 seconds.\n", "\tProcessing ..\\pickle\\20240210\\26-jude.pkl ... done in 0.02 seconds.\n", "\tProcessing ..\\pickle\\20240210\\27-revelation.pkl ... done in 0.40 seconds.\n", "\n", "Finished in 12.77 seconds.\n" ] } ], "source": [ "# Pre-construct the base paths for input and output since they remain constant\n", "baseInputPath = os.path.join(PklDir, '{}.pkl')\n", "baseOutputPath = os.path.join(CsvDir, '{}.csv')\n", "\n", "print ('Exporting pickle files to CSV formated files')\n", "overalTime = time.time()\n", "errorCondition=False\n", "\n", "for directory in (PklDir,CsvDir):\n", " if not os.path.exists(directory):\n", " print(f\"Script aborted. The directory '{directory}' does not exist.\")\n", " errorCondition=True\n", "\n", "# Load books in order\n", "if not errorCondition:\n", " for bo in bo2book:\n", " start_time = time.time()\n", " \n", " # Use formatted strings for file names\n", " inputFile = baseInputPath.format(bo)\n", " outputFile = baseOutputPath.format(bo)\n", " \n", " print(f'\\tProcessing {inputFile} ...', end='')\n", " \n", " try:\n", " with open(inputFile, 'rb') as pklFile:\n", " df = pickle.load(pklFile)\n", " df.to_csv(outputFile, index=False)\n", " print(f' done in {time.time() - start_time:.2f} seconds.')\n", " except pickle.UnpicklingError as e:\n", " print(f\"\\n\\tError while loading {inputFile}: {e}\")\n", " continue\n", " \n", " print(f'\\nFinished in {time.time() - overalTime:.2f} seconds.')" ] }, { "cell_type": "markdown", "metadata": { "toc": true }, "source": [ "# 4 - Text-Fabric dataset production from pickle input\n", "##### [Back to TOC](#TOC)\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 4.1 - Explanation\n", "##### [Back to TOC](#TOC)\n", "\n", "This script creates the Text-Fabric files by recursive calling the TF walker function.\n", "API info: https://annotation.github.io/text-fabric/tf/convert/walker.html\n", "\n", "The pickle files created by the script in section 2.3 are stored on Github location [/resources/pickle](https://github.com/tonyjurg/Nestle1904LFT/tree/main/resources/pickle).\n", "\n", "Explanatory notes about the data interpretation logic are incorporated within the Python code of the director function." 
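, "\n", "\n", "In outline, the director builds the node graph by calling methods on the walker object `cv`. Below is a minimal sketch of that interaction (the book name and the words are placeholders; the actual logic is in the code cell further down):\n", "\n", "```python\n", "def director(cv):\n", "    book = cv.node('book')              # open a non-slot node\n", "    cv.feature(book, book='Matthew')    # attach features to that node\n", "    for w in ('Βίβλος', 'γενέσεως'):    # placeholder words\n", "        slot = cv.slot()                # create a word (slot) node\n", "        cv.feature(slot, word=w, after=' ')\n", "    cv.terminate(book)                  # close the node again\n", "```"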
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 4.2 - Running the TF walker function\n", "##### [Back to TOC](#TOC)" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "This is Text-Fabric 12.2.2\n", "58 features found and 0 ignored\n" ] } ], "source": [ "# Load specific set of variables for the walker\n", "\n", "from tf.fabric import Fabric\n", "from tf.convert.walker import CV\n", "\n", "# setting some TF specific variables\n", "BASE = os.path.expanduser('~/github')\n", "ORG = 'tonyjurg'\n", "REPO = 'Nestle1904LFT'\n", "RELATIVE = 'tf'\n", "TF_DIR = os.path.expanduser(f'{BASE}//{ORG}//{REPO}//{RELATIVE}')\n", "VERSION = f'{scriptVersion}'\n", "TF_PATH = f'{TF_DIR}//{VERSION}'\n", "TF = Fabric(locations=TF_PATH, silent=False)\n", "cv = CV(TF)" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " 0.00s Importing data from walking through the source ...\n", " | 0.00s Preparing metadata... \n", " | SECTION TYPES: book, chapter, verse\n", " | SECTION FEATURES: book, chapter, verse\n", " | STRUCTURE TYPES: book, chapter, verse\n", " | STRUCTURE FEATURES: book, chapter, verse\n", " | TEXT FEATURES:\n", " | | text-critical unicode\n", " | | text-normalized after, normalized\n", " | | text-orig-full after, word\n", " | | text-transliterated after, wordtranslit\n", " | | text-unaccented after, wordunacc\n", " | 0.01s OK\n", " | 0.00s Following director... \n", "\tWe are loading ..\\pickle\\20240210\\01-matthew.pkl...\n", "\tWe are loading ..\\pickle\\20240210\\02-mark.pkl...\n", "\tWe are loading ..\\pickle\\20240210\\03-luke.pkl...\n", "\tWe are loading ..\\pickle\\20240210\\04-john.pkl...\n", "\tWe are loading ..\\pickle\\20240210\\05-acts.pkl...\n", "\tWe are loading ..\\pickle\\20240210\\06-romans.pkl...\n", "\tWe are loading ..\\pickle\\20240210\\07-1corinthians.pkl...\n", "\tWe are loading ..\\pickle\\20240210\\08-2corinthians.pkl...\n", "\tWe are loading ..\\pickle\\20240210\\09-galatians.pkl...\n", "\tWe are loading ..\\pickle\\20240210\\10-ephesians.pkl...\n", "\tWe are loading ..\\pickle\\20240210\\11-philippians.pkl...\n", "\tWe are loading ..\\pickle\\20240210\\12-colossians.pkl...\n", "\tWe are loading ..\\pickle\\20240210\\13-1thessalonians.pkl...\n", "\tWe are loading ..\\pickle\\20240210\\14-2thessalonians.pkl...\n", "\tWe are loading ..\\pickle\\20240210\\15-1timothy.pkl...\n", "\tWe are loading ..\\pickle\\20240210\\16-2timothy.pkl...\n", "\tWe are loading ..\\pickle\\20240210\\17-titus.pkl...\n", "\tWe are loading ..\\pickle\\20240210\\18-philemon.pkl...\n", "\tWe are loading ..\\pickle\\20240210\\19-hebrews.pkl...\n", "\tWe are loading ..\\pickle\\20240210\\20-james.pkl...\n", "\tWe are loading ..\\pickle\\20240210\\21-1peter.pkl...\n", "\tWe are loading ..\\pickle\\20240210\\22-2peter.pkl...\n", "\tWe are loading ..\\pickle\\20240210\\23-1john.pkl...\n", "\tWe are loading ..\\pickle\\20240210\\24-2john.pkl...\n", "\tWe are loading ..\\pickle\\20240210\\25-3john.pkl...\n", "\tWe are loading ..\\pickle\\20240210\\26-jude.pkl...\n", "\tWe are loading ..\\pickle\\20240210\\27-revelation.pkl...\n", " | 21s \"delete\" actions: 0\n", " | 21s \"edge\" actions: 0\n", " | 21s \"feature\" actions: 259450\n", " | 21s \"node\" actions: 121671\n", " | 21s \"resume\" actions: 9626\n", " | 21s \"slot\" actions: 137779\n", " | 21s \"terminate\" actions: 269177\n", " | 27 x \"book\" node 
\n", " | 260 x \"chapter\" node \n", " | 8011 x \"sentence\" node \n", " | 7943 x \"verse\" node \n", " | 105430 x \"wg\" node \n", " | 137779 x \"word\" node = slot type\n", " | 259450 nodes of all types\n", " | 21s OK\n", " | 0.00s checking for nodes and edges ... \n", " | 0.00s OK\n", " | 0.00s checking (section) features ... \n", " | 0.18s OK\n", " | 0.00s reordering nodes ...\n", " | 0.00s No slot sorting needed\n", " | 0.03s Sorting 27 nodes of type \"book\"\n", " | 0.04s Sorting 260 nodes of type \"chapter\"\n", " | 0.05s Sorting 8011 nodes of type \"sentence\"\n", " | 0.08s Sorting 7943 nodes of type \"verse\"\n", " | 0.11s Sorting 105430 nodes of type \"wg\"\n", " | 0.65s Max node = 259450\n", " | 0.65s OK\n", " | 0.00s reassigning feature values ...\n", " | | node feature \"after\" with 137779 nodes\n", " | | node feature \"book\" with 154020 nodes\n", " | | node feature \"booknumber\" with 137806 nodes\n", " | | node feature \"bookshort\" with 137806 nodes\n", " | | node feature \"case\" with 137779 nodes\n", " | | node feature \"chapter\" with 153993 nodes\n", " | | node feature \"clausetype\" with 105430 nodes\n", " | | node feature \"containedclause\" with 137779 nodes\n", " | | node feature \"degree\" with 137779 nodes\n", " | | node feature \"gloss\" with 137779 nodes\n", " | | node feature \"gn\" with 137779 nodes\n", " | | node feature \"headverse\" with 8011 nodes\n", " | | node feature \"junction\" with 105430 nodes\n", " | | node feature \"lemma\" with 137779 nodes\n", " | | node feature \"lex_dom\" with 137779 nodes\n", " | | node feature \"ln\" with 137779 nodes\n", " | | node feature \"markafter\" with 137779 nodes\n", " | | node feature \"markbefore\" with 137779 nodes\n", " | | node feature \"markorder\" with 137779 nodes\n", " | | node feature \"monad\" with 137779 nodes\n", " | | node feature \"mood\" with 137779 nodes\n", " | | node feature \"morph\" with 137779 nodes\n", " | | node feature \"nodeID\" with 137779 nodes\n", " | | node feature \"normalized\" with 137779 nodes\n", " | | node feature \"nu\" with 137779 nodes\n", " | | node feature \"number\" with 137779 nodes\n", " | | node feature \"orig_order\" with 137779 nodes\n", " | | node feature \"person\" with 137779 nodes\n", " | | node feature \"punctuation\" with 137779 nodes\n", " | | node feature \"ref\" with 137779 nodes\n", " | | node feature \"reference\" with 137779 nodes\n", " | | node feature \"roleclausedistance\" with 137779 nodes\n", " | | node feature \"sentence\" with 145790 nodes\n", " | | node feature \"sp\" with 137779 nodes\n", " | | node feature \"sp_full\" with 137779 nodes\n", " | | node feature \"strongs\" with 137779 nodes\n", " | | node feature \"subj_ref\" with 137779 nodes\n", " | | node feature \"tense\" with 137779 nodes\n", " | | node feature \"type\" with 137779 nodes\n", " | | node feature \"unicode\" with 137779 nodes\n", " | | node feature \"verse\" with 145722 nodes\n", " | | node feature \"voice\" with 137779 nodes\n", " | | node feature \"wgclass\" with 105430 nodes\n", " | | node feature \"wglevel\" with 105430 nodes\n", " | | node feature \"wgnum\" with 105430 nodes\n", " | | node feature \"wgrole\" with 105430 nodes\n", " | | node feature \"wgrolelong\" with 105430 nodes\n", " | | node feature \"wgrule\" with 105430 nodes\n", " | | node feature \"wgtype\" with 105430 nodes\n", " | | node feature \"word\" with 137779 nodes\n", " | | node feature \"wordlevel\" with 137779 nodes\n", " | | node feature \"wordrole\" with 137779 nodes\n", " | | node feature 
\"wordrolelong\" with 137779 nodes\n", " | | node feature \"wordtranslit\" with 137779 nodes\n", " | | node feature \"wordunacc\" with 137779 nodes\n", " | 1.76s OK\n", " 23s Features ready to write\n", " 0.00s Exporting 56 node and 1 edge and 1 configuration features to ~/github/tonyjurg/Nestle1904LFT/tf/0.7:\n", " 0.00s VALIDATING oslots feature\n", " 0.02s VALIDATING oslots feature\n", " 0.02s maxSlot= 137779\n", " 0.02s maxNode= 259450\n", " 0.03s OK: oslots is valid\n", " | 0.12s T after to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.13s T book to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.11s T booknumber to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.11s T bookshort to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.14s T case to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.13s T chapter to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.08s T clausetype to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.12s T containedclause to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.11s T degree to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.12s T gloss to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.12s T gn to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.01s T headverse to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.09s T junction to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.15s T lemma to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.12s T lex_dom to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.14s T ln to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.11s T markafter to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.11s T markbefore to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.11s T markorder to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.11s T monad to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.11s T mood to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.12s T morph to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.12s T nodeID to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.14s T normalized to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.12s T nu to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.11s T number to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.11s T orig_order to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.04s T otype to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.11s T person to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.12s T punctuation to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.12s T ref to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.12s T reference to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.11s T roleclausedistance to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.13s T sentence to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.12s T sp to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.12s T sp_full to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.12s T strongs to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.11s T subj_ref to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.12s T tense to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.11s T type to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.14s T unicode to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.12s T verse to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.11s T voice to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.09s T wgclass to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.08s T wglevel to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.09s T wgnum to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.09s T wgrole to 
~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.09s T wgrolelong to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.09s T wgrule to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.09s T wgtype to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.14s T word to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.10s T wordlevel to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.11s T wordrole to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.12s T wordrolelong to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.12s T wordtranslit to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.14s T wordunacc to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.37s T oslots to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " | 0.00s M otext to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", " 6.66s Exported 56 node features and 1 edge features and 1 config features to ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", "done\n" ] } ], "source": [ "###############################################\n", "# Common helper functions #\n", "###############################################\n", "\n", "def sanitize(input):\n", " \"\"\"\n", " Sanitizes the input data to handle missing or undefined values.\n", "\n", " This function is used to ensure that float values and None types are converted to empty strings.\n", " Other data types are returned as-is. This is particularly useful in data processing and conversion\n", " tasks where missing data needs to be handled gracefully.\n", "\n", " Parameters:\n", " input: The data input which can be of any type.\n", "\n", " Returns:\n", " str: An empty string if the input is a float or None, otherwise returns the input as-is.\n", " \"\"\"\n", " if isinstance(input, float) or isinstance(input, type(None)):\n", " return ''\n", " else:\n", " return input\n", "\n", "\n", "def ExpandRole(input):\n", " \"\"\"\n", " Expands syntactic role abbreviations into their full descriptive names.\n", "\n", " This function is particularly useful in parsing and interpreting syntactic structures, especially\n", " in the context of language processing. The expansion is based on the syntactic categories at the clause\n", " level as described in \"MACULA Greek Treebank for the Nestle 1904 Greek New Testament\" (page 5 & 6, section 2.4).\n", "\n", " Parameters:\n", " input (str): Abbreviated syntactic role label.\n", "\n", " Returns:\n", " str: The expanded, full descriptive name of the syntactic role. Returns an empty string for unrecognized inputs.\n", " \"\"\"\n", " roleExpansions = {\n", " \"adv\": 'Adverbial',\n", " \"io\": 'Indirect Object',\n", " \"o\": 'Object',\n", " \"o2\": 'Second Object',\n", " \"s\": 'Subject',\n", " \"p\": 'Predicate',\n", " \"v\": 'Verbal',\n", " \"vc\": 'Verbal Copula',\n", " \"aux\": 'Auxiliar'\n", " }\n", " return roleExpansions.get(input, '')\n", "\n", "def ExpandSP(input):\n", " \"\"\"\n", " Expands Part of Speech (POS) label abbreviations into their full descriptive names.\n", "\n", " This function is utilized for enriching text data with clear, descriptive POS labels.\n", " The expansions are based on the syntactic categories at the word level as described in \n", " \"MACULA Greek Treebank for the Nestle 1904 Greek New Testament\" (page 6 & 7, section 2.2).\n", "\n", " Parameters:\n", " input (str): Abbreviated POS label.\n", "\n", " Returns:\n", " str: The expanded, full descriptive name of the POS label. 
Returns an empty string for unrecognized inputs.\n", " \"\"\"\n", " posExpansions = {\n", " 'adj': 'Adjective',\n", " 'conj': 'Conjunction',\n", " 'det': 'Determiner',\n", " 'intj': 'Interjection',\n", " 'noun': 'Noun',\n", " 'num': 'Numeral',\n", " 'prep': 'Preposition',\n", " 'ptcl': 'Particle',\n", " 'pron': 'Pronoun',\n", " 'verb': 'Verb'\n", " }\n", " return posExpansions.get(input, '')\n", "\n", "\n", "\n", "def removeAccents(text):\n", " \"\"\"\n", " Removes diacritical marks (accents) from Greek words or any text.\n", "\n", " This function is particularly useful in text processing where diacritical marks need to be\n", " removed, such as in search functionality, normalization, or comparison of strings. It leverages\n", " Unicode normalization to decompose characters into their base characters and diacritics, and then\n", " filters out the diacritics.\n", "\n", " Note: This function can be applied to any text where Unicode normalization is applicable.\n", "\n", " Parameters:\n", " text (str): The text from which accents/diacritical marks need to be removed.\n", "\n", " Returns:\n", " str: The input text with all diacritical marks removed.\n", " \"\"\"\n", " return ''.join(c for c in unicodedata.normalize('NFD', text) if unicodedata.category(c) != 'Mn')\n", "\n", "###############################################\n", "# The director routine #\n", "###############################################\n", "\n", "def director(cv):\n", " \n", " ###############################################\n", " # Innitial setup of data etc. #\n", " ###############################################\n", " NoneType = type(None) # needed as tool to validate certain data\n", " IndexDict = {} # init an empty dictionary\n", " WordGroupDict={} # init a dummy dictionary\n", " PrevWordGroupSet = WordGroupSet = []\n", " PrevWordGroupList = WordGroupList = []\n", " RootWordGroup = 0\n", " WordNumber=FoundWords=WordGroupTrack=0\n", " # The following is required to recover succesfully from an abnormal condition\n", " # in the LowFat tree data where a element is labeled as \n", " # this number is arbitrary but should be high enough not to clash with 'real' WG numbers\n", " DummyWGN=200000 # first dummy WG number\n", " \n", " # Following variables are used for textual critical data \n", " criticalMarkCharacters = \"[]()—\"\n", " punctuationCharacters = \",.;·\"\n", " translationTableMarkers = str.maketrans(\"\", \"\", criticalMarkCharacters)\n", " translationTablePunctuations = str.maketrans(\"\", \"\", punctuationCharacters)\n", " punctuations=('.',',',';','·')\n", " \n", " for bo,bookinfo in bo2book.items(): \n", " \n", " ###############################################\n", " # start of section executed for each book #\n", " ###############################################\n", " \n", " # note: bookinfo is a list! Split the data\n", " Book = bookinfo[0] \n", " BookNumber = int(bookinfo[1])\n", " BookShort = bookinfo[2]\n", " BookLoc = os.path.join(PklDir, f'{bo}.pkl') \n", " \n", " # load data for this book into a dataframe. 
\n", " # make sure wordorder is correct\n", " print(f'\\tWe are loading {BookLoc}...')\n", " pkl_file = open(BookLoc, 'rb')\n", " df_unsorted = pickle.load(pkl_file)\n", " pkl_file.close()\n", " \n", " '''\n", " Fill dictionary of column names for this book \n", " sort to ensure proper wordorder\n", " '''\n", " ItemsInRow=1\n", " for itemName in df_unsorted.columns.to_list():\n", " IndexDict.update({'i_{}'.format(itemName): ItemsInRow})\n", " # This is to identify the collumn containing the key to sort upon\n", " if itemName==\"id\": SortKey=ItemsInRow-1\n", " ItemsInRow+=1\n", " \n", " df=df_unsorted.sort_values(by=df_unsorted.columns[SortKey])\n", " del df_unsorted\n", "\n", " # Set up nodes for new book\n", " ThisBookPointer = cv.node('book')\n", " cv.feature(ThisBookPointer, book=Book, booknumber=BookNumber, bookshort=BookShort)\n", " \n", " ThisChapterPointer = cv.node('chapter')\n", " cv.feature(ThisChapterPointer, chapter=1, book=Book)\n", " PreviousChapter=1\n", " \n", " ThisVersePointer = cv.node('verse')\n", " cv.feature(ThisVersePointer, verse=1, chapter=1, book=Book)\n", " PreviousVerse=1\n", " \n", " ThisSentencePointer = cv.node('sentence')\n", " cv.feature(ThisSentencePointer, sentence=1, headverse=1, chapter=1, book=Book)\n", " PreviousSentence=1\n", "\n", " ###############################################\n", " # Iterate through words and construct objects #\n", " ###############################################\n", " \n", " for row in df.itertuples():\n", " WordNumber += 1\n", " FoundWords +=1\n", " \n", " # Detect and act upon changes in sentences, verse and chapter \n", " # the order of terminating and creating the nodes is critical: \n", " # close verse - close chapter - open chapter - open verse \n", " NumberOfParents = sanitize(row[IndexDict.get(\"i_parents\")])\n", " ThisSentence=int(row[IndexDict.get(\"i_Parent{}SN\".format(NumberOfParents-1))])\n", " ThisVerse = sanitize(row[IndexDict.get(\"i_verse\")])\n", " ThisChapter = sanitize(row[IndexDict.get(\"i_chapter\")])\n", " \n", " if (ThisVerse!=PreviousVerse):\n", " cv.terminate(ThisVersePointer)\n", " \n", " if (ThisSentence!=PreviousSentence):\n", " cv.terminate(ThisSentencePointer)\n", " \n", " \n", " if (ThisChapter!=PreviousChapter):\n", " cv.terminate(ThisChapterPointer)\n", " PreviousChapter = ThisChapter\n", " ThisChapterPointer = cv.node('chapter')\n", " cv.feature(ThisChapterPointer, chapter=ThisChapter, book=Book)\n", " \n", " if (ThisVerse!=PreviousVerse):\n", " PreviousVerse = ThisVerse \n", " ThisVersePointer = cv.node('verse')\n", " cv.feature(ThisVersePointer, verse=ThisVerse, chapter=ThisChapter, book=Book)\n", " \n", " if (ThisSentence!=PreviousSentence):\n", " PreviousSentence=ThisSentence\n", " ThisSentencePointer = cv.node('sentence')\n", " cv.feature(ThisSentencePointer, sentence=ThisSentence, headverse=ThisVerse, chapter=ThisChapter, book=Book) \n", " \n", " ###############################################\n", " # analyze and process tags #\n", " ###############################################\n", " \n", " PrevWordGroupList=WordGroupList\n", " WordGroupList=[] # stores current active WordGroup numbers\n", " \n", " for i in range(NumberOfParents-2,0,-1): # important: reversed itteration!\n", " \n", " _WGN=int(row[IndexDict.get(\"i_Parent{}WGN\".format(i))]) \n", " if _WGN!='':\n", " WGN=int(_WGN)\n", " if WGN!='':\n", " WGclass=sanitize(row[IndexDict.get(\"i_Parent{}Class\".format(i))])\n", " WGrule=sanitize(row[IndexDict.get(\"i_Parent{}Rule\".format(i))])\n", " 
WGtype=sanitize(row[IndexDict.get(\"i_Parent{}Type\".format(i))])\n", " if WGclass==WGrule==WGtype=='':\n", " WGclass='empty'\n", " else:\n", " #print ('---',WordGroupList)\n", " if WGN not in WordGroupList:\n", " WordGroupList.append(WGN) \n", " #print(f'append WGN={WGN}')\n", " WordGroupDict[(WGN,0)]=WGN\n", " if WGrule[-2:]=='CL' and WGclass=='': \n", " WGclass='cl*' # to simulate the way Logos presents this condition\n", " WordGroupDict[(WGN,6)]=WGclass\n", " WordGroupDict[(WGN,1)]=WGrule\n", " WordGroupDict[(WGN,8)]=WGtype\n", " WordGroupDict[(WGN,3)]=sanitize(row[IndexDict.get(\"i_Parent{}Junction\".format(i))])\n", " WordGroupDict[(WGN,2)]=sanitize(row[IndexDict.get(\"i_Parent{}Cltype\".format(i))])\n", " WordGroupDict[(WGN,7)]=sanitize(row[IndexDict.get(\"i_Parent{}Role\".format(i))])\n", " WordGroupDict[(WGN,9)]=sanitize(row[IndexDict.get(\"i_Parent{}Appos\".format(i))]) # appos is not pressent any more in the newer dataset. kept here for the time being...\n", " WordGroupDict[(WGN,10)]=NumberOfParents-1-i # = number of parent wordgroups \n", " if not PrevWordGroupList==WordGroupList:\n", " #print ('##',PrevWordGroupList,WordGroupList,NumberOfParents)\n", " if RootWordGroup != WordGroupList[0]:\n", " RootWordGroup = WordGroupList[0]\n", " SuspendableWordGoupList = []\n", " # we have a new sentence. rebuild suspendable wordgroup list\n", " # some cleaning of data may be added here to save on memmory... \n", " #for k in range(6): del WordGroupDict[item,k]\n", " for item in reversed(PrevWordGroupList):\n", " if (item not in WordGroupList):\n", " # CLOSE/SUSPEND CASE\n", " SuspendableWordGoupList.append(item)\n", " #print ('\\n close: '+str(WordGroupDict[(item,0)])+' '+ WordGroupDict[(item,6)]+' '+ WordGroupDict[(item,1)]+' '+WordGroupDict[(item,8)],end=' ') \n", " cv.terminate(WordGroupDict[(item,4)])\n", " for item in WordGroupList:\n", " if (item not in PrevWordGroupList):\n", " if (item in SuspendableWordGoupList):\n", " # RESUME CASE\n", " #print ('\\n resume: '+str(WordGroupDict[(item,0)])+' '+ WordGroupDict[(item,6)]+' '+WordGroupDict[(item,1)]+' '+WordGroupDict[(item,8)],end=' ') \n", " cv.resume(WordGroupDict[(item,4)])\n", " else:\n", " # CREATE CASE\n", " #print ('\\n create: '+str(WordGroupDict[(item,0)])+' '+ WordGroupDict[(item,6)]+' '+ WordGroupDict[(item,1)]+' '+WordGroupDict[(item,8)],end=' ')\n", " WordGroupDict[(item,4)]=cv.node('wg')\n", " WordGroupDict[(item,5)]=WordGroupTrack\n", " WordGroupTrack += 1\n", " cv.feature(WordGroupDict[(item,4)], wgnum=WordGroupDict[(item,0)], junction=WordGroupDict[(item,3)], \n", " clausetype=WordGroupDict[(item,2)], wgrule=WordGroupDict[(item,1)], wgclass=WordGroupDict[(item,6)], \n", " wgrole=WordGroupDict[(item,7)],wgrolelong=ExpandRole(WordGroupDict[(item,7)]),\n", " wgtype=WordGroupDict[(item,8)],wglevel=WordGroupDict[(item,10)])\n", " \n", " # These roles are performed either by a WG or just a single word.\n", " Role=row[IndexDict.get(\"i_role\")]\n", " ValidRoles=[\"adv\",\"io\",\"o\",\"o2\",\"s\",\"p\",\"v\",\"vc\",\"aux\"]\n", " DistanceToRoleClause=0\n", " if isinstance (Role,str) and Role in ValidRoles: \n", " # Role is assign to this word (uniqely)\n", " WordRole=Role\n", " WordRoleLong=ExpandRole(WordRole)\n", " else:\n", " # Role details needs to be taken from some uptree wordgroup \n", " WordRole=WordRoleLong=''\n", " for item in range(1,NumberOfParents-1):\n", " Role = sanitize(row[IndexDict.get(\"i_Parent{}Role\".format(item))])\n", " if isinstance (Role,str) and Role in ValidRoles: \n", " WordRole=Role \n", " 
WordRoleLong=ExpandRole(WordRole)\n", " DistanceToRoleClause=item\n", " break\n", " \n", " # Find the number of the WG containing the clause definition\n", " for item in range(1,NumberOfParents-1):\n", " WGrule = sanitize(row[IndexDict.get(\"i_Parent{}Rule\".format(item))])\n", " if row[IndexDict.get(\"i_Parent{}Class\".format(item))]=='cl' or WGrule[-2:]=='CL': \n", " ContainedClause=sanitize(row[IndexDict.get(\"i_Parent{}WGN\".format(item))])\n", " break\n", "\n", " ###############################################\n", " # analyze and process tags #\n", " ###############################################\n", " \n", " # Determine syntactic categories at word level. \n", " PartOfSpeech=sanitize(row[IndexDict.get(\"i_class\")])\n", " PartOfSpeechFull=ExpandSP(PartOfSpeech)\n", " \n", " # The folling part of code reproduces feature 'word' and 'after' that are\n", " # currently containing incorrect data in a few specific cases.\n", " # See https://github.com/tonyjurg/Nestle1904LFT/blob/main/resources/identifying_odd_afters.ipynb\n", " # Get the word details and detect presence of punctuations\n", " # it also creates the textual critical features\n", "\n", " rawWord=sanitize(row[IndexDict.get(\"i_unicode\")])\n", " cleanWord= rawWord.translate(translationTableMarkers)\n", " rawWithoutPunctuations=rawWord.translate(translationTablePunctuations)\n", " markBefore=markAfter=PunctuationMarkOrder=''\n", " if cleanWord[-1] in punctuations:\n", " punctuation=cleanWord[-1]\n", " after=punctuation+' '\n", " word=cleanWord[:-1]\n", " else:\n", " after=' '\n", " word=cleanWord\n", " punctuation=''\n", " if rawWithoutPunctuations!=word:\n", " markAfter=markBefore=''\n", " if rawWord.find(word)==0:\n", " markAfter=rawWithoutPunctuations.replace(word,\"\")\n", " if punctuation!='':\n", " if rawWord.find(markAfter)-rawWord.find(punctuation)>0:\n", " PunctuationMarkOrder=\"3\" # punct. before mark\n", " else:\n", " PunctuationMarkOrder=\"2\" # punct. after mark.\n", " else:\n", " PunctuationMarkOrder=\"1\" #no punctuation, mark after word\n", " else:\n", " markBefore=rawWithoutPunctuations.replace(word,\"\")\n", " PunctuationMarkOrder=\"0\" #mark is before word\n", " \n", " # Some attributes are not present inside some (small) books. 
The following is to prevent exceptions.\n", " degree='' \n", " if 'i_degree' in IndexDict: \n", " degree=sanitize(row[IndexDict.get(\"i_degree\")]) \n", " subjref=''\n", " if 'i_subjref' in IndexDict: \n", " subjref=sanitize(row[IndexDict.get(\"i_subjref\")]) \n", "\n", " \n", " # Create the word slots\n", " this_word = cv.slot()\n", " cv.feature(this_word, \n", " after= after,\n", " unicode= rawWord,\n", " word= word,\n", " wordtranslit= unidecode(word),\n", " wordunacc= removeAccents(word),\n", " punctuation= punctuation,\n", " markafter= markAfter,\n", " markbefore= markBefore,\n", " markorder= PunctuationMarkOrder,\n", " monad= FoundWords,\n", " orig_order= sanitize(row[IndexDict.get(\"i_wordOrder\")]),\n", " book= Book,\n", " booknumber= BookNumber,\n", " bookshort= BookShort,\n", " chapter= ThisChapter,\n", " ref= sanitize(row[IndexDict.get(\"i_ref\")]),\n", " sp= PartOfSpeech,\n", " sp_full= PartOfSpeechFull,\n", " verse= ThisVerse,\n", " sentence= ThisSentence,\n", " normalized= sanitize(row[IndexDict.get(\"i_normalized\")]),\n", " morph= sanitize(row[IndexDict.get(\"i_morph\")]),\n", " strongs= sanitize(row[IndexDict.get(\"i_strong\")]),\n", " lex_dom= sanitize(row[IndexDict.get(\"i_domain\")]),\n", " ln= sanitize(row[IndexDict.get(\"i_ln\")]),\n", " gloss= sanitize(row[IndexDict.get(\"i_gloss\")]),\n", " gn= sanitize(row[IndexDict.get(\"i_gender\")]),\n", " nu= sanitize(row[IndexDict.get(\"i_number\")]),\n", " case= sanitize(row[IndexDict.get(\"i_case\")]),\n", " lemma= sanitize(row[IndexDict.get(\"i_lemma\")]),\n", " person= sanitize(row[IndexDict.get(\"i_person\")]),\n", " mood= sanitize(row[IndexDict.get(\"i_mood\")]),\n", " tense= sanitize(row[IndexDict.get(\"i_tense\")]),\n", " number= sanitize(row[IndexDict.get(\"i_number\")]),\n", " voice= sanitize(row[IndexDict.get(\"i_voice\")]),\n", " degree= degree,\n", " type= sanitize(row[IndexDict.get(\"i_type\")]),\n", " reference= sanitize(row[IndexDict.get(\"i_ref\")]), \n", " subj_ref= subjref,\n", " nodeID= sanitize(row[IndexDict.get(\"i_id\")]),\n", " wordrole= WordRole,\n", " wordrolelong= WordRoleLong,\n", " wordlevel= NumberOfParents-1,\n", " roleclausedistance = DistanceToRoleClause,\n", " containedclause = ContainedClause\n", " )\n", " cv.terminate(this_word) \n", " \n", " \n", " '''\n", " wrap up the book. 
At the end of the book we need to close all nodes in proper order.\n", " ''' \n", " # close all open WordGroup nodes\n", " for item in WordGroupList:\n", " #cv.feature(WordGroupDict[(item,4)], add some stats?)\n", " cv.terminate(WordGroupDict[item,4])\n", "\n", " cv.terminate(ThisSentencePointer)\n", " cv.terminate(ThisVersePointer)\n", " cv.terminate(ThisChapterPointer) \n", " cv.terminate(ThisBookPointer)\n", "\n", " # clear dataframe for this book, clear the index dictionary\n", " del df\n", " IndexDict.clear()\n", " #gc.collect()\n", " \n", " ###############################################\n", " # end of section executed for each book #\n", " ###############################################\n", "\n", " ###############################################\n", " # end of director function #\n", " ###############################################\n", " \n", "###############################################\n", "# Output definitions #\n", "###############################################\n", "\n", "# define TF dataset granularity\n", "slotType = 'word' \n", "\n", "# dictionary of config data for sections and text formats\n", "otext = { \n", " 'fmt:text-orig-full': '{word}{after}',\n", " 'fmt:text-normalized': '{normalized}{after}',\n", " 'fmt:text-unaccented': '{wordunacc}{after}',\n", " 'fmt:text-transliterated':'{wordtranslit}{after}', \n", " 'fmt:text-critical': '{unicode} ',\n", " 'sectionTypes':'book,chapter,verse',\n", " 'sectionFeatures':'book,chapter,verse',\n", " 'structureFeatures': 'book,chapter,verse',\n", " 'structureTypes': 'book,chapter,verse',\n", " }\n", "\n", "# configure provenance metadata\n", "generic = { # dictionary of metadata meant for all features\n", " 'textFabricVersion': '{}'.format(VERSION), #imported from tf.parameter\n", " 'xmlSourceLocation': 'https://github.com/tonyjurg/Nestle1904LFT/tree/main/resources/xml/20240210',\n", " 'xmlSourceDate': 'February 10, 2024',\n", " 'author': 'Evangelists and apostles',\n", " 'availability': 'Creative Commons Attribution 4.0 International (CC BY 4.0)',\n", " 'converters': 'Tony Jurg',\n", " 'converterSource': 'https://github.com/tonyjurg/Nestle1904LFT/tree/main/resources/converter',\n", " 'converterVersion': '{} ({})'.format(scriptVersion,scriptDate),\n", " 'dataSource': 'MACULA Greek Linguistic Datasets, available at https://github.com/Clear-Bible/macula-greek/tree/main/Nestle1904/nodes',\n", " 'editors': 'Eberhart Nestle (1904)',\n", " 'sourceDescription': 'Greek New Testment (British Foreign Bible Society, 1904)',\n", " 'sourceFormat': 'XML (Low Fat tree XML data)',\n", " 'title': 'Greek New Testament (Nestle1904LFT)'\n", " }\n", "\n", "# set datatype of feature (if not listed here, they are ususaly strings)\n", "intFeatures = { \n", " 'booknumber',\n", " 'chapter',\n", " 'verse',\n", " 'sentence',\n", " 'wgnum',\n", " 'orig_order',\n", " 'monad',\n", " 'wglevel'\n", " }\n", "\n", "# per feature dicts with metadata\n", "# icon provides guidance on feature maturity (✅ = trustworthy, 🆗 = usable, ⚠️ = be carefull when using)\n", "featureMeta = { \n", " 'after': {'description': '✅ Characters (eg. 
punctuations) following the word'},\n", " 'book': {'description': '✅ Book name (in English language)'},\n", " 'booknumber': {'description': '✅ NT book number (Matthew=1, Mark=2, ..., Revelation=27)'},\n", " 'bookshort': {'description': '✅ Book name (abbreviated)'},\n", " 'chapter': {'description': '✅ Chapter number inside book'},\n", " 'verse': {'description': '✅ Verse number inside chapter'},\n", " 'headverse': {'description': '✅ Start verse number of a sentence'},\n", " 'sentence': {'description': '✅ Sentence number (counted per chapter)'},\n", " 'type': {'description': '✅ Wordgroup type information (e.g. verb, verbless, elided, minor)'},\n", " 'wgrule': {'description': '✅ Wordgroup rule information (e.g. Np-Appos, ClCl2, PrepNp)'},\n", " 'orig_order': {'description': '✅ Word order (in source XML file)'},\n", " 'monad': {'description': '✅ Monad (smallest token matching word order in the corpus)'},\n", " 'word': {'description': '✅ Word as it appears in the text (excl. punctuations)'},\n", " 'wordtranslit':{'description': '🆗 Transliteration of the text (in Latin letters, excl. punctuations)'},\n", " 'wordunacc': {'description': '✅ Word without accents (excl. punctuations)'},\n", " 'unicode': {'description': '✅ Word as it appears in the text in Unicode (incl. punctuations)'},\n", " 'punctuation': {'description': '✅ Punctuation after word'},\n", " 'markafter': {'description': '🆗 Text critical marker after word'},\n", " 'markbefore': {'description': '🆗 Text critical marker before word'},\n", " 'markorder': {'description': ' Order of punctuation and text critical marker'},\n", " 'ref': {'description': '✅ Value of the ref ID (taken from XML source data)'},\n", " 'sp': {'description': '✅ Part of Speech (abbreviated)'},\n", " 'sp_full': {'description': '✅ Part of Speech (long description)'}, \n", " 'normalized': {'description': '✅ Surface word with accents normalized and trailing punctuations removed'},\n", " 'lemma': {'description': '✅ Lexeme (lemma)'},\n", " 'morph': {'description': '✅ Morphological tag (Sandborg-Petersen morphology)'},\n", " # see also discussion on relation between lex_dom and ln \n", " # @ https://github.com/Clear-Bible/macula-greek/issues/29\n", " 'lex_dom': {'description': '✅ Lexical domain according to Semantic Dictionary of Biblical Greek, SDBG (not present everywhere?)'},\n", " 'ln': {'description': '✅ Louw-Nida lexical classification (not present everywhere?)'},\n", " 'strongs': {'description': '✅ Strongs number'},\n", " 'gloss': {'description': '✅ English gloss'},\n", " 'gn': {'description': '✅ Grammatical gender (Masculine, Feminine, Neuter)'},\n", " 'nu': {'description': '✅ Grammatical number (Singular, Plural)'},\n", " 'case': {'description': '✅ Grammatical case (Nominative, Genitive, Dative, Accusative, Vocative)'},\n", " 'person': {'description': '✅ Grammatical person of the verb (first, second, third)'},\n", " 'mood': {'description': '✅ Grammatical mood of the verb (passive, etc)'},\n", " 'tense': {'description': '✅ Grammatical tense of the verb (e.g. Present, Aorist)'},\n", " 'number': {'description': '✅ Grammatical number of the verb (e.g. singular, plural)'},\n", " 'voice': {'description': '✅ Grammatical voice of the verb (e.g. active, passive)'},\n", " 'degree': {'description': '✅ Degree (e.g. Comparative, Superlative)'},\n", " 'type': {'description': '✅ Grammatical type of noun or pronoun (e.g. 
Common, Personal)'},\n", " 'reference': {'description': '✅ Reference (to nodeID in XML source data, not yet post-processed)'},\n", " 'subj_ref': {'description': '🆗 Subject reference (to nodeID in XML source data, not yet post-processed)'},\n", " 'nodeID': {'description': '✅ Node ID (as in the XML source data)'},\n", " 'junction': {'description': '✅ Junction data related to a wordgroup'},\n", " 'wgnum': {'description': '✅ Wordgroup number (counted per book)'},\n", " 'wgclass': {'description': '✅ Class of the wordgroup (e.g. cl, np, vp)'},\n", " 'wgrole': {'description': '✅ Syntactical role of the wordgroup (abbreviated)'},\n", " 'wgrolelong': {'description': '✅ Syntactical role of the wordgroup (full)'},\n", " 'wordrole': {'description': '✅ Syntactical role of the word (abbreviated)'},\n", " 'wordrolelong':{'description': '✅ Syntactical role of the word (full)'},\n", " 'wgtype': {'description': '✅ Wordgroup type details (e.g. group, apposition)'},\n", " 'clausetype': {'description': '✅ Clause type details (e.g. Verbless, Minor)'},\n", " 'wglevel': {'description': '🆗 Number of the parent wordgroups for a wordgroup'},\n", " 'wordlevel': {'description': '🆗 Number of the parent wordgroups for a word'},\n", " 'roleclausedistance': {'description': '⚠️ Distance to the wordgroup defining the syntactical role of this word'},\n", " 'containedclause': {'description': '🆗 Contained clause (WG number)'}\n", " }\n", "\n", "\n", "###############################################\n", "# the main function #\n", "###############################################\n", "\n", "good = cv.walk(\n", " director,\n", " slotType,\n", " otext=otext,\n", " generic=generic,\n", " intFeatures=intFeatures,\n", " featureMeta=featureMeta,\n", " warn=True,\n", " force=True\n", ")\n", "\n", "if good:\n", " print (\"done\")" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "## 5 - Housekeeping\n", "##### [Back to TOC](#TOC)" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "## 5.1 - Optionally zip up the pickle files\n", "##### [Back to TOC](#TOC)\n", "\n", "To save file space, the pickle files can be zipped. The following cell zips each pickle file and optionally removes the original. Removing the originals is important when they grow large (i.e., more than 100 MB), since files of that size cause problems when uploaded to GitHub. Hence it is advised to include the .pkl extension in the ignore list (.gitignore), for example by adding the line `*.pkl`." ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Zipping up all pickle files.\n", "\tloading ..\\pickle\\20240210\\01-matthew.pkl... Zipping done in 0.43 seconds.\n", "\tloading ..\\pickle\\20240210\\02-mark.pkl... Zipping done in 0.26 seconds.\n", "\tloading ..\\pickle\\20240210\\03-luke.pkl... Zipping done in 3.46 seconds.\n", "\tloading ..\\pickle\\20240210\\04-john.pkl... Zipping done in 0.34 seconds.\n", "\tloading ..\\pickle\\20240210\\05-acts.pkl... Zipping done in 0.48 seconds.\n", "\tloading ..\\pickle\\20240210\\06-romans.pkl... Zipping done in 0.18 seconds.\n", "\tloading ..\\pickle\\20240210\\07-1corinthians.pkl... Zipping done in 0.16 seconds.\n", "\tloading ..\\pickle\\20240210\\08-2corinthians.pkl... Zipping done in 0.10 seconds.\n", "\tloading ..\\pickle\\20240210\\09-galatians.pkl... Zipping done in 0.05 seconds.\n", "\tloading ..\\pickle\\20240210\\10-ephesians.pkl... Zipping done in 0.07 seconds.\n", "\tloading ..\\pickle\\20240210\\11-philippians.pkl... 
Zipping done in 0.04 seconds.\n", "\tloading ..\\pickle\\20240210\\12-colossians.pkl... Zipping done in 0.04 seconds.\n", "\tloading ..\\pickle\\20240210\\13-1thessalonians.pkl... Zipping done in 0.03 seconds.\n", "\tloading ..\\pickle\\20240210\\14-2thessalonians.pkl... Zipping done in 0.02 seconds.\n", "\tloading ..\\pickle\\20240210\\15-1timothy.pkl... Zipping done in 0.04 seconds.\n", "\tloading ..\\pickle\\20240210\\16-2timothy.pkl... Zipping done in 0.04 seconds.\n", "\tloading ..\\pickle\\20240210\\17-titus.pkl... Zipping done in 0.02 seconds.\n", "\tloading ..\\pickle\\20240210\\18-philemon.pkl... Zipping done in 0.01 seconds.\n", "\tloading ..\\pickle\\20240210\\19-hebrews.pkl... Zipping done in 0.12 seconds.\n", "\tloading ..\\pickle\\20240210\\20-james.pkl... Zipping done in 0.04 seconds.\n", "\tloading ..\\pickle\\20240210\\21-1peter.pkl... Zipping done in 0.05 seconds.\n", "\tloading ..\\pickle\\20240210\\22-2peter.pkl... Zipping done in 0.03 seconds.\n", "\tloading ..\\pickle\\20240210\\23-1john.pkl... Zipping done in 0.04 seconds.\n", "\tloading ..\\pickle\\20240210\\24-2john.pkl... Zipping done in 0.01 seconds.\n", "\tloading ..\\pickle\\20240210\\25-3john.pkl... Zipping done in 0.01 seconds.\n", "\tloading ..\\pickle\\20240210\\26-jude.pkl... Zipping done in 0.01 seconds.\n", "\tloading ..\\pickle\\20240210\\27-revelation.pkl... Zipping done in 0.24 seconds.\n", "\n", "Finished in 6.32 seconds.\n" ] } ], "source": [ "# set variable to control whether the original pickle files are removed after zipping\n", "removeOriginal=False\n", "\n", "import zipfile\n", "import os\n", "\n", "def zipTheFile(sourceFile, destinationFile, removeOriginal):\n", " \"\"\"\n", " Create a zip file from the specified source file and optionally remove the source file.\n", "\n", " Parameters:\n", " sourceFile (str) : The file path of the source file to be zipped.\n", " destinationFile (str) : The file path for the resulting zip file.\n", " removeOriginal (bool): If True, the source file will be deleted after zipping.\n", " \"\"\"\n", " # check for existence of the file to zip\n", " if not os.path.exists(sourceFile):\n", " print(f\"\\tSource file does not exist: {sourceFile}\")\n", " return False\n", " # Get only the file name, not the full path\n", " fileNameOnly = os.path.basename(sourceFile)\n", " # Creating a zip file from the source file\n", " with zipfile.ZipFile(destinationFile, 'w', zipfile.ZIP_DEFLATED) as zipArchive:\n", " zipArchive.write(sourceFile,arcname=fileNameOnly)\n", " # Removing the source file if required\n", " if removeOriginal: \n", " os.remove(sourceFile)\n", " return True\n", "\n", "# Pre-construct the base paths for input and output since they remain constant\n", "baseInputPath = os.path.join(PklDir, '{}.pkl')\n", "baseOutputPath = os.path.join(PklDir, '{}.zip')\n", "\n", "print('Zipping up all pickle files' + (' and removing them afterwards.' 
if removeOriginal else '.'))\n", "overallTime = time.time()\n", "errorOccurred = False\n", "\n", "# Process the books in order\n", "for bo in bo2book:\n", " startTime = time.time()\n", " \n", " # Use formatted strings for file names\n", " inputFile = baseInputPath.format(bo)\n", " outputFile = baseOutputPath.format(bo)\n", " \n", " if not zipTheFile(inputFile, outputFile, removeOriginal):\n", " errorOccurred = True\n", " break\n", " else: \n", " print(f'\\tloading {inputFile}...', end='')\n", " print(f' Zipping done in {time.time() - startTime:.2f} seconds.')\n", " \n", "if errorOccurred:\n", " print(\"Operation aborted due to an error (are all pickle files already zipped?).\")\n", "else:\n", " print(f'\\nFinished in {time.time() - overallTime:.2f} seconds.')\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 5.2 - Inspect the created dataset\n", "##### [Back to TOC](#TOC)\n", "\n", "Perform some inspections on the newly created dataset." ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "tags": [] }, "outputs": [], "source": [ "# Load TF code\n", "\n", "from tf.fabric import Fabric\n", "from tf.app import use" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "tags": [] }, "outputs": [ { "data": { "text/markdown": [ "**Locating corpus resources ...**" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "app: ~/text-fabric-data/github/tonyjurg/Nestle1904LFT/app" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "data: ~/github/tonyjurg/Nestle1904LFT/tf/0.7" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", " TF: TF API 12.2.2, tonyjurg/Nestle1904LFT/app v3, Search Reference
\n", " Data: tonyjurg - Nestle1904LFT 0.7, Character table, Feature docs
\n", "
Node types\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", "\n", "\n", " \n", " \n", " \n", " \n", "\n", "\n", "\n", " \n", " \n", " \n", " \n", "\n", "\n", "\n", " \n", " \n", " \n", " \n", "\n", "\n", "\n", " \n", " \n", " \n", " \n", "\n", "\n", "\n", " \n", " \n", " \n", " \n", "\n", "\n", "\n", " \n", " \n", " \n", " \n", "\n", "
Name# of nodes# slots / node% coverage
book275102.93100
chapter260529.92100
verse794317.35100
sentence801117.20100
wg1054306.85524
word1377791.00100
\n", " Sets: no custom sets
\n", " Features:
\n", "
Nestle 1904 (Low Fat Tree)\n", "
\n", "\n", "
\n", "
\n", "after\n", "
\n", "
str
\n", "\n", " ✅ Characters (eg. punctuations) following the word\n", "\n", "
\n", "\n", "
\n", "
\n", "book\n", "
\n", "
str
\n", "\n", " ✅ Book name (in English language)\n", "\n", "
\n", "\n", "
\n", "
\n", "booknumber\n", "
\n", "
int
\n", "\n", " ✅ NT book number (Matthew=1, Mark=2, ..., Revelation=27)\n", "\n", "
\n", "\n", "
\n", "
\n", "bookshort\n", "
\n", "
str
\n", "\n", " ✅ Book name (abbreviated)\n", "\n", "
\n", "\n", "
\n", "
\n", "case\n", "
\n", "
str
\n", "\n", " ✅ Gramatical case (Nominative, Genitive, Dative, Accusative, Vocative)\n", "\n", "
\n", "\n", "
\n", "
\n", "chapter\n", "
\n", "
int
\n", "\n", " ✅ Chapter number inside book\n", "\n", "
\n", "\n", "
\n", "
\n", "clausetype\n", "
\n", "
str
\n", "\n", " ✅ Clause type details (e.g. Verbless, Minor)\n", "\n", "
\n", "\n", "
\n", "
\n", "containedclause\n", "
\n", "
str
\n", "\n", " 🆗 Contained clause (WG number)\n", "\n", "
\n", "\n", "
\n", "
\n", "degree\n", "
\n", "
str
\n", "\n", " ✅ Degree (e.g. Comparitative, Superlative)\n", "\n", "
\n", "\n", "
\n", "
\n", "gloss\n", "
\n", "
str
\n", "\n", " ✅ English gloss\n", "\n", "
\n", "\n", "
\n", "
\n", "gn\n", "
\n", "
str
\n", "\n", " ✅ Gramatical gender (Masculine, Feminine, Neuter)\n", "\n", "
\n", "\n", "
\n", "
\n", "headverse\n", "
\n", "
str
\n", "\n", " ✅ Start verse number of a sentence\n", "\n", "
\n", "\n", "
\n", "
\n", "junction\n", "
\n", "
str
\n", "\n", " ✅ Junction data related to a wordgroup\n", "\n", "
\n", "\n", "
\n", "
\n", "lemma\n", "
\n", "
str
\n", "\n", " ✅ Lexeme (lemma)\n", "\n", "
\n", "\n", "
\n", "
\n", "lex_dom\n", "
\n", "
str
\n", "\n", " ✅ Lexical domain according to Semantic Dictionary of Biblical Greek, SDBG (not present everywhere?)\n", "\n", "
\n", "\n", "
\n", "
\n", "ln\n", "
\n", "
str
\n", "\n", " ✅ Lauw-Nida lexical classification (not present everywhere?)\n", "\n", "
\n", "\n", "
\n", "
\n", "markafter\n", "
\n", "
str
\n", "\n", " 🆗 Text critical marker after word\n", "\n", "
\n", "\n", "
\n", "
\n", "markbefore\n", "
\n", "
str
\n", "\n", " 🆗 Text critical marker before word\n", "\n", "
\n", "\n", "
\n", "
\n", "markorder\n", "
\n", "
str
\n", "\n", "  Order of punctuation and text critical marker\n", "\n", "
\n", "\n", "
\n", "
\n", "monad\n", "
\n", "
int
\n", "\n", " ✅ Monad (smallest token matching word order in the corpus)\n", "\n", "
\n", "\n", "
\n", "
\n", "mood\n", "
\n", "
str
\n", "\n", " ✅ Gramatical mood of the verb (passive, etc)\n", "\n", "
\n", "\n", "
\n", "
\n", "morph\n", "
\n", "
str
\n", "\n", " ✅ Morphological tag (Sandborg-Petersen morphology)\n", "\n", "
\n", "\n", "
\n", "
\n", "nodeID\n", "
\n", "
str
\n", "\n", " ✅ Node ID (as in the XML source data)\n", "\n", "
\n", "\n", "
\n", "
\n", "normalized\n", "
\n", "
str
\n", "\n", " ✅ Surface word with accents normalized and trailing punctuations removed\n", "\n", "
\n", "\n", "
\n", "
\n", "nu\n", "
\n", "
str
\n", "\n", " ✅ Gramatical number (Singular, Plural)\n", "\n", "
\n", "\n", "
\n", "
\n", "number\n", "
\n", "
str
\n", "\n", " ✅ Gramatical number of the verb (e.g. singular, plural)\n", "\n", "
\n", "\n", "
\n", "
\n", "otype\n", "
\n", "
str
\n", "\n", " \n", "\n", "
\n", "\n", "
\n", "
\n", "person\n", "
\n", "
str
\n", "\n", " ✅ Gramatical person of the verb (first, second, third)\n", "\n", "
\n", "\n", "
\n", "
\n", "punctuation\n", "
\n", "
str
\n", "\n", " ✅ Punctuation after word\n", "\n", "
\n", "\n", "
\n", "
\n", "ref\n", "
\n", "
str
\n", "\n", " ✅ Value of the ref ID (taken from XML sourcedata)\n", "\n", "
\n", "\n", "
\n", "
\n", "reference\n", "
\n", "
str
\n", "\n", " ✅ Reference (to nodeID in XML source data, not yet post-processes)\n", "\n", "
\n", "\n", "
\n", "
\n", "roleclausedistance\n", "
\n", "
str
\n", "\n", " ⚠️ Distance to the wordgroup defining the syntactical role of this word\n", "\n", "
\n", "\n", "
\n", "
\n", "sentence\n", "
\n", "
int
\n", "\n", " ✅ Sentence number (counted per chapter)\n", "\n", "
\n", "\n", "
\n", "
\n", "sp\n", "
\n", "
str
\n", "\n", " ✅ Part of Speech (abbreviated)\n", "\n", "
\n", "\n", "
\n", "
\n", "sp_full\n", "
\n", "
str
\n", "\n", " ✅ Part of Speech (long description)\n", "\n", "
\n", "\n", "
\n", "
\n", "strongs\n", "
\n", "
str
\n", "\n", " ✅ Strongs number\n", "\n", "
\n", "\n", "
\n", "
\n", "subj_ref\n", "
\n", "
str
\n", "\n", " 🆗 Subject reference (to nodeID in XML source data, not yet post-processes)\n", "\n", "
\n", "\n", "
\n", "
\n", "tense\n", "
\n", "
str
\n", "\n", " ✅ Gramatical tense of the verb (e.g. Present, Aorist)\n", "\n", "
\n", "\n", "
\n", "
\n", "type\n", "
\n", "
str
\n", "\n", " ✅ Gramatical type of noun or pronoun (e.g. Common, Personal)\n", "\n", "
\n", "\n", "
\n", "
\n", "unicode\n", "
\n", "
str
\n", "\n", " ✅ Word as it apears in the text in Unicode (incl. punctuations)\n", "\n", "
\n", "\n", "
\n", "
\n", "verse\n", "
\n", "
int
\n", "\n", " ✅ Verse number inside chapter\n", "\n", "
\n", "\n", "
\n", "
\n", "voice\n", "
\n", "
str
\n", "\n", " ✅ Gramatical voice of the verb (e.g. active,passive)\n", "\n", "
\n", "\n", "
\n", "
\n", "wgclass\n", "
\n", "
str
\n", "\n", " ✅ Class of the wordgroup (e.g. cl, np, vp)\n", "\n", "
\n", "\n", "
\n", "
\n", "wglevel\n", "
\n", "
int
\n", "\n", " 🆗 Number of the parent wordgroups for a wordgroup\n", "\n", "
\n", "\n", "
\n", "
\n", "wgnum\n", "
\n", "
int
\n", "\n", " ✅ Wordgroup number (counted per book)\n", "\n", "
\n", "\n", "
\n", "
\n", "wgrole\n", "
\n", "
str
\n", "\n", " ✅ Syntactical role of the wordgroup (abbreviated)\n", "\n", "
\n", "\n", "
\n", "
\n", "wgrolelong\n", "
\n", "
str
\n", "\n", " ✅ Syntactical role of the wordgroup (full)\n", "\n", "
\n", "\n", "
\n", "
\n", "wgrule\n", "
\n", "
str
\n", "\n", " ✅ Wordgroup rule information (e.g. Np-Appos, ClCl2, PrepNp)\n", "\n", "
\n", "\n", "
\n", "
\n", "wgtype\n", "
\n", "
str
\n", "\n", " ✅ Wordgroup type details (e.g. group, apposition)\n", "\n", "
\n", "\n", "
\n", "
\n", "word\n", "
\n", "
str
\n", "\n", " ✅ Word as it appears in the text (excl. punctuations)\n", "\n", "
\n", "\n", "
\n", "
\n", "wordlevel\n", "
\n", "
str
\n", "\n", " 🆗 Number of the parent wordgroups for a word\n", "\n", "
\n", "\n", "
\n", "
\n", "wordrole\n", "
\n", "
str
\n", "\n", " ✅ Syntactical role of the word (abbreviated)\n", "\n", "
\n", "\n", "
\n", "
\n", "wordrolelong\n", "
\n", "
str
\n", "\n", " ✅ Syntactical role of the word (full)\n", "\n", "
\n", "\n", "
\n", "
\n", "wordtranslit\n", "
\n", "
str
\n", "\n", " 🆗 Transliteration of the text (in latin letters, excl. punctuations)\n", "\n", "
\n", "\n", "
\n", "
\n", "wordunacc\n", "
\n", "
str
\n", "\n", " ✅ Word without accents (excl. punctuations)\n", "\n", "
\n", "\n", "
\n", "
\n", "oslots\n", "
\n", "
none
\n", "\n", " \n", "\n", "
\n", "\n", "
\n", "
\n", "\n", " Settings:
specified
  1. apiVersion: 3
  2. appName: tonyjurg/Nestle1904LFT
  3. appPath:C:/Users/tonyj/text-fabric-data/github/tonyjurg/Nestle1904LFT/app
  4. commit: g28423636826427b12ab3a8d2a3f19d1281f102d2
  5. css: ''
  6. dataDisplay:
    • excludedFeatures:
      • orig_order
      • verse
      • book
      • chapter
    • noneValues:
      • none
      • unknown
      • no value
      • NA
      • ''
    • showVerseInTuple: 0
    • textFormat: text-orig-full
  7. docs:
    • docBase: https://github.com/tonyjurg/Nestle1904LFT/blob/main/docs/
    • docPage: about
    • docRoot: https://github.com/tonyjurg/Nestle1904LFT
    • featureBase:https://github.com/tonyjurg/Nestle1904LFT/blob/main/docs/features/<feature>.md
  8. interfaceDefaults: {fmt: layout-orig-full}
  9. isCompatible: True
  10. local: local
  11. localDir:C:/Users/tonyj/text-fabric-data/github/tonyjurg/Nestle1904LFT/_temp
  12. provenanceSpec:
    • corpus: Nestle 1904 (Low Fat Tree)
    • doi: 10.5281/zenodo.10182594
    • org: tonyjurg
    • relative: /tf
    • repo: Nestle1904LFT
    • repro: Nestle1904LFT
    • version: 0.7
    • webBase: https://learner.bible/text/show_text/nestle1904/
    • webHint: Show this on the Bible Online Learner website
    • webLang: en
    • webUrl:https://learner.bible/text/show_text/nestle1904/<1>/<2>/<3>
    • webUrlLex: {webBase}/word?version={version}&id=<lid>
  13. release: v0.6.3
  14. typeDisplay:
    • book:
      • condense: True
      • hidden: True
      • label: {book}
      • style: ''
    • chapter:
      • condense: True
      • hidden: True
      • label: {chapter}
      • style: ''
    • sentence:
      • hidden: 0
      • label: #{sentence} (start: {book} {chapter}:{headverse})
      • style: ''
    • verse:
      • condense: True
      • excludedFeatures: chapter verse
      • label: {book} {chapter}:{verse}
      • style: ''
    • wg:
      • hidden: 0
      • label:#{wgnum}: {wgtype} {wgclass} {clausetype} {wgrole} {wgrule} {junction}
      • style: ''
    • word:
      • base: True
      • features: lemma
      • featuresBare: gloss
      • surpress: chapter verse
  15. writing: grc
\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "\n", "\n" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "text/html": [ "
TF API: names N F E L T S C TF Fs Fall Es Eall Cs Call directly usable

" ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# load the app and data\n", "N1904 = use (\"tonyjurg/Nestle1904LFT\", version=scriptVersion, checkout=\"clone\", hoist=globals())" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "### 5.2.1 - Dump otype\n", "##### [Back to TOC](#TOC)" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "@node\n", "@author=Evangelists and apostles\n", "@availability=Creative Commons Attribution 4.0 International (CC BY 4.0)\n", "@converterSource=https://github.com/tonyjurg/Nestle1904LFT/tree/main/resources/converter\n", "@converterVersion=0.7 (February 20, 2024)\n", "@converters=Tony Jurg\n", "@dataSource=MACULA Greek Linguistic Datasets, available at https://github.com/Clear-Bible/macula-greek/tree/main/Nestle1904/nodes\n", "@editors=Eberhart Nestle (1904)\n", "@sourceDescription=Greek New Testment (British Foreign Bible Society, 1904)\n", "@sourceFormat=XML (Low Fat tree XML data)\n", "@textFabricVersion=0.7\n", "@title=Greek New Testament (Nestle1904LFT)\n", "@valueType=str\n", "@xmlSourceDate=February 10, 2024\n", "@xmlSourceLocation=https://github.com/tonyjurg/Nestle1904LFT/tree/main/resources/xml/20240210\n", "@writtenBy=Text-Fabric\n", "@dateWritten=2024-02-20T13:52:34Z\n", "\n", "1-137779\tword\n", "137780-137806\tbook\n", "137807-138066\tchapter\n", "138067-146077\tsentence\n", "146078-154020\tverse\n", "154021-259450\twg\n", "\n" ] } ], "source": [ "with open(f'{TF_PATH}/otype.tf') as fh:\n", " print(fh.read())" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "### 5.2.2 - Dump otext\n", "##### [Back to TOC](#TOC)" ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "@config\n", "@author=Evangelists and apostles\n", "@availability=Creative Commons Attribution 4.0 International (CC BY 4.0)\n", "@converterSource=https://github.com/tonyjurg/Nestle1904LFT/tree/main/resources/converter\n", "@converterVersion=0.7 (February 20, 2024)\n", "@converters=Tony Jurg\n", "@dataSource=MACULA Greek Linguistic Datasets, available at https://github.com/Clear-Bible/macula-greek/tree/main/Nestle1904/nodes\n", "@editors=Eberhart Nestle (1904)\n", "@fmt:text-critical={unicode} \n", "@fmt:text-normalized={normalized}{after}\n", "@fmt:text-orig-full={word}{after}\n", "@fmt:text-transliterated={wordtranslit}{after}\n", "@fmt:text-unaccented={wordunacc}{after}\n", "@sectionFeatures=book,chapter,verse\n", "@sectionTypes=book,chapter,verse\n", "@sourceDescription=Greek New Testment (British Foreign Bible Society, 1904)\n", "@sourceFormat=XML (Low Fat tree XML data)\n", "@structureFeatures=book,chapter,verse\n", "@structureTypes=book,chapter,verse\n", "@textFabricVersion=0.7\n", "@title=Greek New Testament (Nestle1904LFT)\n", "@xmlSourceDate=February 10, 2024\n", "@xmlSourceLocation=https://github.com/tonyjurg/Nestle1904LFT/tree/main/resources/xml/20240210\n", "@writtenBy=Text-Fabric\n", "@dateWritten=2024-02-20T13:52:38Z\n", "\n", "\n" ] } ], "source": [ "with open(f'{TF_PATH}/otext.tf') as fh:\n", " print(fh.read())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 5.3 - Publish it on GitHub\n", "##### [Back to TOC](#TOC)\n", "\n", "The following section will first load the created Text-Fabric dataset. Then it will publish it on gitHub." 
] }, { "cell_type": "code", "execution_count": 5, "metadata": { "tags": [] }, "outputs": [], "source": [ "# Define the repository\n", "ORG = \"tonyjurg\"\n", "REPO = \"Nestle1904LFT\"\n", "\n", "# Added details for the release\n", "MESSAGE = \"New release\"\n", "DESCRIPTION = \"\"\"\n", "This release uses a new dataset. \n", "\n", "The main difference is in feature Strongs:\n", " * Some errors were corrected\n", " * composite words are now with two or more Strong values\n", " \n", "This release has been published with the command `A.publish()`, a function in Text-Fabric.\n", "\"\"\"" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Working in repo ~/github/tonyjurg/Nestle1904LFT\n", "Make a new commit ...\n", "Compute a new tag ...\n", "Latest release = v0.6.4\n", "New release = v0.6.5\n", "Push the repo to GitHub, including tag v0.6.5\n", "Turn the tag into a release on GitHub with additional data\n", "responce: url: https://api.github.com/repos/tonyjurg/Nestle1904LFT/releases\n", "Create the zip file with the complete data\n", "Data to be zipped:\n", "\tOK app (v0.6.5 a0d7a9) : ~/github/tonyjurg/Nestle1904LFT/app\n", "\tOK main data (v0.6.5 a0d7a9) : ~/github/tonyjurg/Nestle1904LFT/tf/0.7\n", "Writing zip file ...\n", "Upload the zip file and attach it to the release on GitHub\n" ] }, { "ename": "KeyError", "evalue": "'id'", "output_type": "error", "traceback": [ "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m", "\u001b[1;31mKeyError\u001b[0m Traceback (most recent call last)", "Cell \u001b[1;32mIn[7], line 1\u001b[0m\n\u001b[1;32m----> 1\u001b[0m N1904\u001b[38;5;241m.\u001b[39mpublishRelease(\u001b[38;5;241m3\u001b[39m, message\u001b[38;5;241m=\u001b[39mMESSAGE, description\u001b[38;5;241m=\u001b[39mDESCRIPTION)\n", "File \u001b[1;32m~\\anaconda3\\envs\\Text-Fabric\\Lib\\site-packages\\tf\\advanced\\repo.py:954\u001b[0m, in \u001b[0;36mpublishRelease\u001b[1;34m(app, increase, message, description)\u001b[0m\n\u001b[0;32m 950\u001b[0m binFile \u001b[38;5;241m=\u001b[39m baseNm(binPath)\n\u001b[0;32m 952\u001b[0m console(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mUpload the zip file and attach it to the release on GitHub\u001b[39m\u001b[38;5;124m\"\u001b[39m)\n\u001b[1;32m--> 954\u001b[0m releaseId \u001b[38;5;241m=\u001b[39m response\u001b[38;5;241m.\u001b[39mjson()[\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mid\u001b[39m\u001b[38;5;124m\"\u001b[39m]\n\u001b[0;32m 955\u001b[0m headers[\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mContent-Type\u001b[39m\u001b[38;5;124m\"\u001b[39m] \u001b[38;5;241m=\u001b[39m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mapplication/zip\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[0;32m 956\u001b[0m uploadUrl \u001b[38;5;241m=\u001b[39m (\n\u001b[0;32m 957\u001b[0m \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mbUrlUpload\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m/repos/\u001b[39m\u001b[38;5;132;01m{\u001b[39;00morg\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m/\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mrepo\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m/releases/\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mreleaseId\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m/assets?name=\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mbinFile\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m\"\u001b[39m\n\u001b[0;32m 958\u001b[0m )\n", "\u001b[1;31mKeyError\u001b[0m: 'id'" ] } 
], "source": [ "N1904.publishRelease(3, message=MESSAGE, description=DESCRIPTION)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.5" }, "toc": { "base_numbering": 1, "nav_menu": {}, "number_sections": true, "sideBar": true, "skip_h1_title": false, "title_cell": "Table of Contents", "title_sidebar": "Contents", "toc_cell": true, "toc_position": { "height": "calc(100% - 180px)", "left": "10px", "top": "150px", "width": "321.391px" }, "toc_section_display": true, "toc_window_display": true } }, "nbformat": 4, "nbformat_minor": 4 }