The tutorial is aimed at scholars who want to acquaint themselves with a “low-threshold” (generic, simple yet powerful, and explorative) workflow for modelling, extracting, processing (queries, visualizations), and publishing “structured data” (LOD, Semantic Web) from heterogeneous XML sources by means of XPath. The instructors assume basic acquaintance with XML and XPath (both will be revised briefly, and “cheat sheets” will be provided).
12 persons.
Participants should bring their own laptop with an XML editor installed (we recommend the oXygen XML editor, for which a trial license is available) and a modern web browser.
Triple statement extraction from heterogeneous XML resources; basic research workflow (statement formulation, statement extraction, triple store import, SPARQL querying, visualization).
RDF is an abstract, high-level data model, and RDF statements themselves are formulated abstractly enough to transcend the data heterogeneity ingrained in the individualized notations and practices essential to XML-encoded resources. Printed as well as digital scholarly editions in this regard for the most part oscillate between more data-centered and more narrative-centered, verbose approaches. RDF may thus function as a bridge technology: it provides the means to mirror several heterogeneous XML resources onto one common, abstract (meta)data layer of interconnected as well as outward-pointing data points that is easily and efficiently queryable and visualizable. In this way RDF and LOD extend the capabilities of XML resources without cluttering the underlying documents or imposing external restrictions on the encoding scholars (see Brodhun, de la Iglesia, Moretton (2015); Grüntgens, Schrade (2016), 61–62; also compare Ciotti, Tomasi (2016); Ciotti (2018)).
One of the main focus points of the workshop is thus to make it clear to the participants that it is pivotal to recognize the ‘implicit semantic potential’ already inherent in human-readable XML resources, e.g. in the form of semantically rich markup
(‘implicit’ in this context means not being formally expressed as RDF triples in an RDF serialization format such as RDF/XML; for more detailed examples see Grüntgens, Schrade (2016), 54; Ciotti, Tomasi (2016), pars. 46–49). This potential accordingly “[has] to be transformed into explicit semantic annotations (e.g. RDF) to make the data usable for Semantic Web approaches [...]. On the whole, the process of translating XML to RDF is therefore mainly focused on the determination of general statement patterns, which can then be applied to and extracted from all resources of the data set in question.” (Grüntgens, Schrade (2016), 56).
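As an illustration of the implicit/explicit distinction (all element names, URIs, and the predicate are invented for the example, not taken from any actual edition), a TEI-style snippet such as the following implies a statement without expressing it as RDF:

```xml
<!-- Illustrative markup: a human reader infers "this letter was sent by
     this person", but the statement exists only implicitly -->
<correspAction type="sent">
  <persName ref="https://example.org/person/walden">Herwarth Walden</persName>
</correspAction>
```

The same statement made explicit as an RDF triple (Turtle serialization, hypothetical vocabulary):

```turtle
<https://example.org/letter/1>
    <https://example.org/vocab#sentBy>
    <https://example.org/person/walden> .
```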
In the above-mentioned context, many scholars may well know about the obvious advantages of supplementing their XML resources with an additional RDF layer and nevertheless find it difficult to bridge the gap between XML and RDF. A common obstacle in this regard is that statement extraction from XML may deteriorate into overly complex or semi-manual data wrangling. Additionally, most ontologies, such as CIDOC CRM, are (or at least may be perceived as being) too complicated and intricate for a first “low-threshold” explorative approach to statement extraction in particular and the structured data ecosystem in general.
The tutorial thus aims to mitigate both hurdles by teaching how to utilize T. Schrade’s XTriples web service for RDF statement extraction (see Grüntgens, Schrade (2016)). The web service provides a low-threshold approach to statement extraction from any well-formed XML resource (be it a full-fledged REST API serving TEI-XML or the entrails of a word processor file like .DOC, .ODF, or .FODT) via XPath, and to triple building via simple XML statements. With the aid of a configuration template written in simple XML, scholars are quickly enabled to build first statements for prototypical research that are “flat”, i.e. not built upon a complex ontology, and in this way to gradually acquaint themselves further with the extended toolbox around RDF (different serialization formats, triple stores, SPARQL, ontologies, …). These “flat” patterns must nevertheless be properly formalized and operationalized with regard to the research question. Thus, after having worked out a prototype, the scholar may (and should) subsequently lift the prototypical extraction onto a distributable level by incorporating more complex ontologies into the statements described in the configuration template, thereby providing the semantic interoperability needed, e.g. for LOD (concerning this necessity see e.g. Eide, Ore (2019), 194–195).
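To give a rough impression of such a configuration template, the sketch below reconstructs the general shape of an XTriples configuration from the published examples in Grüntgens, Schrade (2016); the exact element names and attributes should be verified against the XTriples documentation, and all URIs and XPath expressions here are placeholders:

```xml
<xtriples>
  <configuration>
    <vocabularies>
      <!-- vocabularies used in the statement patterns (placeholder URI) -->
      <vocabulary prefix="ex" uri="https://example.org/vocab#"/>
    </vocabularies>
    <triples>
      <!-- one "flat" statement pattern: letter -> sentBy -> person -->
      <statement>
        <subject type="uri">/tei:TEI/@xml:id</subject>
        <predicate prefix="ex">sentBy</predicate>
        <object type="uri">//tei:correspAction[@type='sent']/tei:persName/@ref</object>
      </statement>
    </triples>
  </configuration>
  <collection>
    <!-- the XML resources the patterns are applied to (placeholder URI) -->
    <resource uri="https://example.org/api/letters/1.xml"/>
  </collection>
</xtriples>
```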
The tutorial will start with a short introduction to the basic assumptions behind the RDF data model, a short overview of a statement extraction workflow with XTriples, and a quick recap of the basics of XPath.
The tutorial will then use research data provided by the API of the scholarly edition “Der Sturm: Digitale Quellenedition zur Geschichte der internationalen Avantgarde”. In small groups, the participants will formulate simple statement patterns based on these resources, e.g. not corresponding to an established ontology.
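Such “flat” patterns can be written down informally before touching any configuration file. A sketch of what one extracted instance per pattern might look like in Turtle (all vocabulary terms and URIs are invented for illustration, not taken from the Sturm edition):

```turtle
@prefix ex: <https://example.org/vocab#> .

# letter -> sentBy -> person, -> sentFrom -> place, -> sentOn -> date
<https://example.org/letter/1>
    ex:sentBy   <https://example.org/person/walden> ;
    ex:sentFrom <https://example.org/place/berlin> ;
    ex:sentOn   "1914-05-02"^^<http://www.w3.org/2001/XMLSchema#date> .
```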
Subsequently the participants will inscribe these statement patterns into the XTriples configuration file in order to automatically extract the desired statements by means of form-style POST requests.
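A form-style POST request of this kind can be prepared with nothing but the Python standard library. In the minimal sketch below, the endpoint URL and the form field names are assumptions and must be checked against the XTriples documentation before use:

```python
from urllib import parse, request

# Assumed endpoint URL of the XTriples web service -- verify before use.
XTRIPLES_ENDPOINT = "https://xtriples.lod.academy/extract.xql"

# Placeholder configuration; in the tutorial this would be the template
# the participants filled with their statement patterns.
config_xml = """<xtriples>
  <configuration>...</configuration>
  <collection>...</collection>
</xtriples>"""

def build_extraction_request(config: str, fmt: str = "rdf") -> request.Request:
    """Build (but do not send) the form-style POST request for extraction.
    The field names "configuration" and "format" are assumptions."""
    body = parse.urlencode({"configuration": config, "format": fmt}).encode()
    return request.Request(XTRIPLES_ENDPOINT, data=body, method="POST")

req = build_extraction_request(config_xml)
# response = request.urlopen(req)  # uncomment to actually run the extraction
```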
The participants will then iterate the process of adding statements to the configuration, extracting, and evaluating the statements. Once a sound first basis has been achieved, the participants will make a final extraction and import their RDF/XML files into a web-based RDF4J triple store (provided for the tutorial by the instructors), where the separate graphs will be merged automatically.
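The automatic merging is no special feature of the store but follows from RDF semantics: a graph is a set of triples, so importing several files amounts to set union, with shared statements deduplicated. A toy illustration (invented data) in Python:

```python
# An RDF graph is a *set* of triples; importing several graphs into one
# store is set union, so duplicate statements are stored only once.
graph_a = {
    ("ex:letter1", "dc:creator", "ex:walden"),
    ("ex:walden", "rdfs:label", "Herwarth Walden"),
}
graph_b = {
    ("ex:letter2", "dc:creator", "ex:walden"),
    ("ex:walden", "rdfs:label", "Herwarth Walden"),  # duplicate statement
}
merged = graph_a | graph_b
print(len(merged))  # prints 3: the duplicate label statement collapses
```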
The participants will then be introduced to the basics of SPARQL. Guided by the instructors, the groups will subsequently perform some simple queries to become acquainted with SPARQL and the corresponding query endpoint.
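A first query of this simple kind might look as follows (the vocabulary term is hypothetical, chosen only to match the illustrative examples above):

```sparql
# List every person who appears as the sender of a letter
PREFIX ex: <https://example.org/vocab#>

SELECT DISTINCT ?person
WHERE {
  ?letter ex:sentBy ?person .
}
```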
Further, more complex queries that aggregate information in tabular form may be executed from ‘saved queries’ written in advance by the instructors, in order to facilitate the process.
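An aggregating query of this sort could, for instance, count letters per sender and return a table ready for CSV export (again with a hypothetical vocabulary term):

```sparql
# Number of letters sent per person, as a sortable two-column table
PREFIX ex: <https://example.org/vocab#>

SELECT ?person (COUNT(?letter) AS ?letters)
WHERE {
  ?letter ex:sentBy ?person .
}
GROUP BY ?person
ORDER BY DESC(?letters)
```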
The aggregate will then be visualized with the web-based service https://rawgraphs.io. To be clear: the focal point of this section is not on performing the most intricate queries, but rather on showing in a lucid way how, and on what kind of technological foundation, a simple aggregation and subsequent visualization of RDF data may be performed, thus clearly outlining the conceptual side of the workflow as a whole.