Music notation addressability

Raffaele Viglianti, University of Maryland, rviglian@umd.edu


Long Paper (English). Keywords: text and music; music notation; web; citation; API; information retrieval; music; software design and development; information architecture; internet / world wide web; interdisciplinary collaboration; digitisation, theory and practice; semantic web; linking and annotation; standards and interoperability.
Introduction

How can one virtually ‘circle’ some music notation as one would on a printed score? How can a machine interpret this ‘circling’ to select and retrieve the relevant music notation in digital format? This paper will introduce the concept of addressability for music notation, on the basis of a comparison with textual addressability as defined by Michael Witmore (2010). Additionally, the paper will report on the work of Enhancing Music Notation Addressability (EMA), an NEH-funded one-year project that has developed methods for addressing arbitrary portions of encoded music notation on the web.

Many Digital Humanities projects are concerned with the digitization of cultural objects for varied purposes of study and dissemination. Theorists such as Willard McCarty (2005) and Julia Flanders (2009) have highlighted the fact that digitization involves the creation of a data model of a cultural object, whereby scholarly interpretation and analysis are inevitably included in the model. Editorial projects in literary studies, for example, often model sources by encoding transcription and editorial intervention with the Text Encoding Initiative (TEI) format. The ability to identify and name textual structures is a fundamental operation in the creation of such models. Michael Witmore has called text a “massively addressable object” (2010); that is, given certain abstractions and conventions, it is possible to identify areas of a text such as characters and words, as well as chapters or proper names. Reading practices influence and contribute to the development of such conventions and abstractions, but, Witmore argues, addressability is a textual condition regardless of technology. With digital texts, modes of address become more abstract, so that arbitrary taxonomies can be identified as well as more established ones. To exemplify a more abstract mode of address, Witmore suggests items “identified as a ‘History’ by Heminges and Condell in the First Folio”. This enhanced addressability available in a digital context is the engine for textual analysis and scholarly discourse about digital text.

This idea of addressability is arguably applicable to many more kinds of “text”, including music notation; indeed, addressing units of music notation (such as measures, notes, and phrases) has long been a powerful instrument in musicology for both analysis and historical narrative. Addressing written music notation, however, is not the musicologist’s only instrument: music exists in several domains besides the written or “graphemic” one, each addressable in its own way (see Babbitt 1965). For the purpose of this paper, we focus on written Western music notation, both because it shares features with written language and because of its prominent role in musicological discourse.

Music notation, however, is more complicated to represent digitally than text. Human-computer interaction has since its early days been built around the concepts of character and line, which makes dealing with “plain” text a fairly straightforward matter for many basic operations; counting the number of characters in a given plain text document is trivial in any digital environment. Modern computing systems are able to support complex ancient and modern writing systems, including those requiring right-to-left strings and compound symbols, with the Unicode Consortium at the forefront of this internationalization effort. Nonetheless, computationally speaking, a “string” of text remains a sequence of characters even in these more complex representations: many compound Unicode characters still retain sequentiality, i.e. one component comes after the other, and the compound symbol only makes sense if they are in the correct order. Music notation is not a string of text and cannot be reduced to such a sequence; it requires substantial computational modelling of even the simplest musical text before any further operation is possible. This is particularly evident when music notation is represented with markup, which implies a system based on characters and lines. There are many different ways of representing a single note; some aspects are common to all representation systems, such as information about pitch and duration, but each system will prioritize certain aspects over others. To give a simple example, one system may represent beams (the ligatures that connect flagged notes, usually of shorter duration), while another may ignore them altogether. By grouping notes together, beams provide information that is important, though secondary to pitch and duration, to the reader of a music score, whether a performer, a musicologist, or an algorithm.
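To make the point concrete, here is a minimal sketch, in plain Python rather than any actual encoding format, of how two hypothetical systems might represent the same pair of beamed eighth notes; the structures and field names are invented for illustration:

# System A models the beam explicitly as a grouping structure.
system_a = {
    "beam": [
        {"pitch": "d", "octave": 4, "duration": 8},
        {"pitch": "e", "octave": 4, "duration": 8},
    ]
}

# System B records pitch and duration only; the beam is never modelled.
system_b = [
    {"pitch": "d", "octave": 4, "duration": 8},
    {"pitch": "e", "octave": 4, "duration": 8},
]

# Counting notes is trivial in either system...
assert len(system_a["beam"]) == len(system_b) == 2

# ...but a query such as "find all beamed groups" can only be answered
# against System A, because System B discarded that information.
print("beamed groups in A:", 1 if "beam" in system_a else 0)

Both systems agree on pitch and duration, but only the first can answer questions about beaming: every representation system embodies a model and a set of priorities.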

Nonetheless, there are simple units that are typically represented by all systems for common Western music notation, such as measure, staff (or instrument), and beat. The EMA project, therefore, developed a URI scheme and an Application Programming Interface (API) to make it possible to target music notation resources on the web regardless of their format. Such a scheme may facilitate (and in some cases enable) a number of activities around music notation documents published on the web. The following list gives a few basic examples, spanning scholarly, visual, and procedural uses, of how an implementation of the URI scheme could be useful to musicological research:

Analysis: addressing components of music notation for analytical purposes; for example, precisely identifying the start and end of a pedal tone in Bach’s Prelude no. 6 in D minor, BWV 851.

Rendering: rendering music notation in an interactive environment such as a browser or a tablet requires the ability to cut up a large music document, for example to show only as many measures as fit in a given space.

Processing: extracted portions of music notation can be passed on to another process; for example, given the MEI encoding of the Overture to Mozart’s Don Giovanni, extract the string parts and send them to another program that returns a harmonic analysis.

Citation: quoting a passage from an encoded music notation file, for example the timpani part in the opening bars of the Overture to Mozart’s Don Giovanni.

Highlighting: addressing a segment of music notation to highlight it in a visual context (e.g. with color).

The EMA project has particularly focused on facilitating citation and attribution of credit, as is discussed in the “Evaluation” section below.

A brief overview of the specification

The specification was created to provide a web-friendly mechanism for addressing specific portions of music notation in digital format. This is not unlike the APIs often provided by image servers for retrieving specific portions of an image. Such servers typically operate on a given large image file and are able to return different zoom levels and coordinate spaces. The International Image Interoperability Framework (IIIF) has recently created an API to generalize interaction with image providers, so that it can be implemented across multiple servers and digital libraries. IIIF was used as a model for the Music Addressability API created for EMA and briefly described here.

Consider the following example, taken from the Du Chemin: Lost Voices project (http://digitalduchemin.org), and the notation highlighted in the boxes:

The highlighted notation occurs between measures 38 and 39, on the first and third staves (labelled Superius and Tenor: this is a Renaissance choral piece). Measure 38, however, is not considered in full, but only from the third beat onwards. This selection can be expressed according to a URI syntax:

/{identifier}/{measures}/{staves}/{beats}/

/dc0519.mei/38-39/1,3/@3-3

The measures are expressed as a range (38-39); staves can be selected through a range or separately with a comma (1,3); and beats are always relative to their measure, so @3-3 means from the third beat of the starting measure to the third beat of the ending measure. A complete description of the URI scheme and the API is available online. In this specification the beat is the primary driver of the selection: it allows for precise addressability of contiguous as well as non-contiguous areas.
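As an illustration, the path for such a selection can be assembled mechanically from its four components. The following minimal Python sketch (the helper function is ours, not part of the specification) reproduces the example above:

def ema_path(identifier, measures, staves, beats):
    # Assemble an EMA selection path from the four components of the
    # /{identifier}/{measures}/{staves}/{beats}/ template.
    return "/".join(["", identifier, measures, staves, beats])

# Measures 38-39, staves 1 and 3, beat 3 of m. 38 to beat 3 of m. 39.
assert ema_path("dc0519.mei", "38-39", "1,3", "@3-3") == "/dc0519.mei/38-39/1,3/@3-3"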

Music notation, however, occasionally breaks rules in favor of flexibility. Cadenzas, for example, are ornamental passages of an improvisational nature that can be written out with notation that disregards a measure’s beat, making it impossible to address subsets of the cadenza with the syntax discussed above. While EMA’s URI scheme offers granularity sufficient to address the vast majority of Western music notation, a necessary future improvement of the API is, indeed, an extension that would make it possible to address music notation with a more flexible treatment of beat.

Evaluation

In order to evaluate the specification, EMA has created an implementation of the API as a web service. While the URI specification is independent of any specific representation, an implementation must know how to operate on specific formats. The web service that we coded operates on the Music Encoding Initiative (MEI) format and is called Open MEI Addressability Service (Omas). A demo is available online. Omas interprets a conformant URI, retrieves the specified MEI resource, applies the selection, and returns it. An additional parameter on the URI can be used to determine how “complete” the retrieved selection should be (whether it should, for example, include time and key signatures).
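By way of illustration, a request to such a service might look like the following Python sketch; the base URL, the file address, and the exact identifier encoding are placeholders and assumptions, not the documented endpoint:

from urllib.parse import quote
import requests

OMAS_BASE = "https://example.org/ema"  # hypothetical deployment

# The identifier names the MEI file to operate on; here we assume it is
# the file's own URL, percent-encoded to fit in a single path segment.
identifier = quote("http://digitalduchemin.org/mei/dc0519.mei", safe="")

# Measures 38-39, staves 1 and 3, beat 3 of m. 38 to beat 3 of m. 39.
url = f"{OMAS_BASE}/{identifier}/38-39/1,3/@3-3"

resp = requests.get(url)
resp.raise_for_status()
print(resp.text[:300])  # the returned selection as an MEI (XML) document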

Like an image server, Omas assumes that the information specified by the URL can be retrieved from the target MEI file. If requested, the web service can also return metadata about an MEI file, such as its number of measures, staves, and beats, and how these change throughout the document. This can be used to facilitate the construction of URL requests able to return the required selection.
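Continuing the hypothetical sketch above, a client might first ask for this metadata before building a selection request; the “/info” suffix and the shape of the response are our assumptions for illustration:

from urllib.parse import quote
import requests

OMAS_BASE = "https://example.org/ema"  # hypothetical deployment
identifier = quote("http://digitalduchemin.org/mei/dc0519.mei", safe="")

# Hypothetical information request: total measures, staff labels, beat
# counts and where they change, used to validate a selection URL before
# issuing it (e.g. that measure 39 exists and has at least three beats).
info = requests.get(f"{OMAS_BASE}/{identifier}/info").json()
print(info)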

Finally, EMA partnered with the Du Chemin: Lost Voices project to model a number of micro-analyses addressing music notation from their existing collection of MEI documents. In a second phase of the project, the analyses were re-modeled as Linked Open Data according to the Nanopublication guidelines (Nanopublication is an ontology for publishing scientific data: http://nanopub.org). The Nanopublication server for Du Chemin: Lost Voices is available online. Each EMA nanopublication addresses an arbitrary portion of music notation using the URI specification described here, and Omas operates as a web service to connect the nanopublications with the collection of MEI files in Du Chemin.
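The general pattern can be sketched in Python with rdflib, using the generic Web Annotation vocabulary as a stand-in for the project’s actual nanopublication model; the analysis URI, the selection URI, and the comment text are invented for illustration:

from rdflib import Graph, Literal, Namespace, URIRef

OA = Namespace("http://www.w3.org/ns/oa#")

g = Graph()
analysis = URIRef("http://example.org/analysis/1")  # hypothetical analysis
# An EMA selection URI serves as the stable, machine-actionable target.
selection = URIRef("https://example.org/ema/dc0519.mei/38-39/1,3/@3-3")

g.add((analysis, OA.hasTarget, selection))
g.add((analysis, OA.hasBody, Literal("An invented analytical comment.")))

print(g.serialize(format="turtle"))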

Bibliography

Babbitt, M. (1965). The use of computers in musicological research. Perspectives of New Music, 3(2): pp. 74–83.

Flanders, J. (2009). Data and Wisdom: Electronic Editing and the Quantification of Knowledge. Literary and Linguistic Computing, 24(1): pp. 53–62.

McCarty, W. (2005). Chapter 1: Modelling. Humanities Computing. London: Palgrave Macmillan.

Witmore, M. (2010). Text: A Massively Addressable Object.