The word hermeneutics is defined in the dictionary as "the art or science
of interpretation, esp. of Scripture. Commonly dist. from exegesis or practical exposition."
The lexicographer who composed that entry seems to have been more than usually cautious — are
we dealing with an art or a science here? Is hermeneutics really best defined by contrast, as
something inherently impractical (unlike, presumably, the eminently practical pursuit of
biblical exegesis)? The meaning of a word is often more than can be summed up by a
lexicographer, however cautious or eminent. Learned words like this one frequently carry with
them senses more directly derived from their etymological roots, and this one is no
exception.
In Greek myth, Hermes was the mediator responsible for explaining the messages of the gods to
mere mortals, hence the Greek word ἑρμηνεύς, an interpreter, from which hermeneutics ultimately derives.
What is incomprehensible for a secular rationalist is often revelatory for a mystic, and it
is surely no coincidence therefore that we find a quasi-mystical paradox at the heart of much
thinking about hermeneutics: we read, for example, of the hermeneutic circle, in which the parts of a text can be understood only through the whole, and the whole only through its parts.
Viewed historically, the focus of hermeneutics shifts from the divine to the secular, in tandem with that of society as a whole. If for the 18th century the business of hermeneutics was still largely confined to biblical exegesis, by the end of the 19th century, under the influence of theorists such as Schleiermacher and Dilthey, it had become the basis of a general methodology for the humanities.
In attempts to explicate how interpretations are arrived at and justified, hermeneutics has taken on a new prominence.
This is surely because, in the post-modern and post-structuralist world, hermeneutics has
become a key part of what might be termed cultural cognition. Cultural objects do not simply
require an interpretation: it is the act of bestowing that interpretation which validates their status as cultural objects in the first place. Texts and other artefacts alike are invested
with meaning by our use of them, and it is therefore interpretation alone which confers value
on them. Small wonder that Derrida, citing Montaigne, takes it as self-evident that "we need to interpret interpretations more than to interpret things".
If hermeneutics is the study of interpretation itself, it seems useful to investigate the goals of that process. What is the object of the hermeneutic act? Many of its traditional goals have been discredited by current thinking: we no longer see the objective of our analysis as being to uncover an eternal verity. In the more restricted world of literary criticism, we have seen such goals as the establishment of authorial intention, of the original authentic context, or of the effect on an ideal reader all become increasingly unfashionable. This is only partly because the observer effect (still a novelty in the sciences but central to the humanities) raises the question of whose interpretation it is we are seeking to apply. There is ample evidence that not all interpretations are equally useful or have equal explanatory force; yet on what grounds do we decide which interpretations can be disregarded?
The British semiotician Daniel Chandler remarks somewhere: "We cannot write […]. This is an aim which tends to distinguish social science from such arts as literary criticism." Although most literary critics of my acquaintance might probably wish to question the implied slur of the latter sentence, they would generally endorse the force of the former.
It is an odd characteristic of the way we currently deal with our written heritage that, despite the debunking of author-ity, the canon remains alive and well in the marketplace. We may no longer identify literature exclusively with the production of dead white European males; instead, however, we create new canonical collections, of pop culture, of Afro-American literature, of women's writing. Canonicity itself, the desire to catch the whole of some class of valued cultural phenomena, often defined by exclusion, seems inescapable.
A curious enthusiasm for reconstructions of imagined past times surrounds us, notably in the fad for music performed on period instruments.
A vigorous scepticism thus remains necessary. We would do well to bear in mind the etymological connection between the words hermetic and hermeneutic.
The hermeneutic act thus seems to have a crucial role in mediating and determining our experience of cultural objects. In selecting interpretations of such objects, we seek not only to explain those others who created them but also to explain ourselves and our tangled reactions to them. In this complex business, hermeneutics has an important social function, not simply in broadening and enriching individual experience of the world, but also in motivating social coherence and social change. It is at the very heart of humanism, and of human society.
One suggestive insight gained by investigating the difference between speech and writing is the extent to which both forms of text depend on semiotic systems beyond
their immediate constituents. In speech, contextual features such as the relationship of the
speakers to each other or their surroundings have at least as important an explicative role as
what they actually say. In writing, the physical appearance of a text, the medium by which it
is presented, and its audience's expectations of such forms are of equally great significance.
(Some have even famously asserted that "the medium is the message".) It is not for purely
technical reasons therefore that we require of scholarship an understanding of the relationship
between the technologies of text and their application, as well as the historical results of
that process.
In this section I focus on the semiotic aspects of hermeneutics, in the specific field of text encoding. I begin with a brief attempt to identify key characteristics of the coding systems associated with texts, whether these may be said to exist within, behind, or amongst texts.
It seems self-evident that a text has at least three major axes along which we may attempt to
analyse it, and thus at least three interlocking semiotic systems. A text is simultaneously an
image (which may be transferred from one physical instance to another, by various imaging
techniques); a linguistic construct (which may equally be encoded using different modalities,
as when a written text is performed); and an information structure (it has semantic content
relating to a perception of the world at large). It may be noteworthy that these three
dimensions seem also to be reflected in three different kinds of software: word processing
software focussing on the appearance of text, text retrieval software focussing on its
linguistic components, and database systems focussing on its semantic content.
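One crude illustration, using invented element names, may make the three axes concrete; each fragment below encodes the same short sentence along a different axis:

<!-- as image: its typographic appearance -->
<line rend="italic">The cat sat on the mat.</line>

<!-- as linguistic construct: words with part-of-speech labels -->
<s><w pos="DET">The</w> <w pos="NOUN">cat</w> <w pos="VERB">sat</w>
   <w pos="PREP">on</w> <w pos="DET">the</w> <w pos="NOUN">mat</w>.</s>

<!-- as information structure: the state of affairs it reports -->
<fact subject="cat" relation="sits-on" object="mat"/>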
Texts and their meanings are not however to be constrained by the capabilities of software.
They remain defiantly both linguistic and physical objects; their formal organisation may seem
to be linear but is generally not, being characterised by multiple hierarchic structures and
interlinked components. Moreover, as cultural objects, they are at once products of and definers of specific cultural practices.
The scope and variety of the encoding systems we need to envisage in developing a unified account of the way that hermeneutics works in texts may thus seem very large indeed. The claim of this paper is, however, that a unified approach remains feasible. As an example, we consider a much studied piece of parchment, sometimes known as MS Cotton Vitellius A xv, a rather poor representation of the start of which appears below:
What exactly is going on when we process this image, when we make an interpretation of it? Clearly, there is a mapping process in which the various visual signals here are classified as either irrelevant noise or as signifiers of some kind — as letters, punctuation, decoration, and so on. A scholarly reading goes further, identifying not just discrete letter forms, but also forms which appear to be discrete but are in fact common variants of each other (such as upper and lower case, italic and bold, etc). Structural signifiers — the use of white space between words, in this example — must also be identified. Not to labour the obvious, it is interesting to note that in a printed or written text the mapping between signifier and signified is fixed and conventional: though it may become inaccessible or misunderstood, it is not inherently flexible.
Here, then, is one fixed reading of the above text (based on Wrenn's edition of 1953):
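(What follows is the conventional modern editorial form of the opening lines; Wrenn's punctuation and word-division may differ in detail.)

Hwæt, we Gar-Dena    in gear-dagum,
þeod-cyninga    þrym gefrunon,
hu ða æþelingas    ellen fremedon.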
In this printed rendition, white space and lineation are used to flag explicitly the
boundaries of metrical units (lines, stanzas, and even the hemistichs of Old English verse)
not actually explicit in the manuscript. These units are the result of an act of
interpretation; they both represent and determine a particular reading. The particular mapping
chosen for each visual signal is informed by expectation, convention, and often somewhat arcane
knowledge: we call this competence literacy.
For example, we might choose to encode the manuscript lines above in either of the two following ways, amongst many others.
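A first possibility (the tagging here is an illustrative sketch in a TEI-like idiom, not a quotation of the original encoding) might run:

<lg>
 <l>Hw&aelig;t, we Gar-Dena in gear-dagum,</l>
 <l>&thorn;eod-cyninga &thorn;rym gefrunon,</l>
 <l>hu &eth;a &aelig;&thorn;elingas ellen fremedon.</l>
</lg>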
In this version, letter forms are normalised, by means of entity references where necessary,
and spacing is silently normalised. Most strikingly, however, the metrical structure is made
explicit by the addition of tags which mark the boundaries of verse lines and stanzas. Much
information about the original lineation and rendition is lost, but much information not
explicitly present in the original is added. Contrast this with the following encoding:

&H;&wynn;æt we garde na
in gear-dagum þeod cyninga
þrym ge frunon huða
æþelinga&s; ellen
fremedon. oft scyld scefing
sceaþe
þreatum, moneg
of
feasceaft funden…
In this second version, a rather different set of decisions has been taken. Again, the individual characters and interword spacing have been normalised, though the linguistically invisible space between garde and na, where a line break falls in mid-word in the manuscript, has been retained: this version preserves the original lineation rather than the metrical structure.
Neither of these digital versions is in any sense definitive: each records a particular set of interpretative decisions about the same object.
The term markup is used here in an extended sense.
We may use it to describe the process by which individual components of a writing system or other
scheme are represented, and for the simple reduction to linear form which digital recording
requires. We can also use it for the more obvious acts of representing structure and
appearance, whether original or intended. Markup is also able to represent characterisations
such as analysis and interpretation. For convenience, the features which markup makes explicit may be loosely grouped into three kinds: compositional, contextual, and interpretive.
Some typical compositional features include the formal structure of a text — its constituent sections, chapters, headings etc., as well as its linguistic structure — its constituent sentences, clauses, words, morphemes etc. From a different perspective, we might identify as compositional features the components of a text's discourse structure — its exchanges, moves, acts, etc. A third view concerns itself more with the ontological status of a text's composition: its constituent revisions, deletions, additions etc., or its history as a shifting nexus of discrete fragments.
Some typical contextual features include a consideration of the agencies by which a text came into being or is identified as such (its author, title, publisher…) and of the situation in which it is experienced (the intended or actual audience, the mode of performance itself, the predefined category of text to which it explicitly or implicitly belongs…). Some may be identifiable only externally (its subject, text-type, mode), while others are internal (size, encoding, revision status).
Some typical interpretive features include linguistic properties such as morpho-syntactic classifications, lemmatisation, sense-disambiguation, identification of particular semantic or discourse features, and in general all kinds of annotation and commentary, for example associating passages in one text with passages in another, or citing instances of a more abstract knowledge structure.
Despite the convenience of this kind of triage, it has to be stressed that at bottom all
markup is interpretive. In most encoded texts, features of all three kinds typically co-occur.
For example, the emendation of a manuscript reading records at once a compositional fact (the letters changed), a contextual one (the agency responsible for the change), and an interpretive one (the editorial judgment motivating it).
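In TEI terms, for instance, a single emendation might be encoded along these lines (the readings, the editor's identifier, and the certainty value are all invented for illustration):

<choice>
  <!-- compositional: the letters actually transmitted by the witness -->
  <sic>fyrena</sic>
  <!-- contextual: the agency responsible (resp); interpretive: the judgment and its certainty -->
  <corr resp="#ed1953" cert="medium">fyrene</corr>
</choice>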
It should now be apparent why the availability of a single encoding scheme, a unified semiotic system, is of such importance to the emerging discipline of digital transcription. By using a single formalism we reduce the complexity inherent in representing the interconnectedness of all aspects of our hermeneutic analysis, and thus facilitate a polyvalent analysis.
Markup has, however, another function, in some ways a more critical one. By making explicit a theory about some aspect of a document, markup maps a (human) interpretation of the text into a set of codes on which computer processing can be performed. It thus enables us to record human interpretations in a mechanically shareable way. The availability of large language corpora enables us to improve on impressionistic intuition about the behaviour of language users with reference to something larger than individual experience. In rather the same way, the availability of encoded textual interpretations can make explicit, and thus shareable, a critical consensus about the status of any of the textual features discussed in the previous section for a given text or set of texts. It provides an interlingua for the sharing of interpretations, an accessible hermetic code.
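By way of illustration only, and assuming the illustrative l elements used in the first encoding above: once verse lines are tagged, a wholly generic tool can operate on the interpretation they record. A minimal XSLT stylesheet might extract and number them:

<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <xsl:template match="/">
    <!-- list every encoded verse line, preceded by its position -->
    <xsl:for-each select="//l">
      <xsl:value-of select="position()"/>
      <xsl:text>: </xsl:text>
      <xsl:value-of select="normalize-space(.)"/>
      <xsl:text>&#10;</xsl:text>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>

The stylesheet knows nothing of Old English or of metre; it processes only the shared, explicit code.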
If we see digitised and encoded texts as nothing less than the vehicle by which the scholarly tradition is to be maintained, questions of digital preservation take on a more than esoteric technical interest. And even here, in the world of archival stores and long term digital archiving, a consideration of hermeneutic theory is necessary. The continuity of comprehension on which scholarship depends implies, necessitates indeed, a continuity in the availability of digitally stored information.
Digital media, however, are notoriously short-lived, as anyone who has ever tried to rescue last year's floppy disk knows. To ensure that data stored on such media remains usable, it must be periodically refreshed: copied, exactly and completely, onto whatever storage medium is currently in favour.
In that last phrase, however, there lurks a catch. Digital media suffer not only from
physical decay but also from technical obsolescence. The bits on a disk may have been preserved
perfectly but if a computer environment (software and hardware) no longer exists capable of
processing them, they are so much noise. Computer environments have changed out of all
recognition during the last few years and show no sign of stabilising at any point in the
future. To ensure that digital data remains comprehensible therefore, simple refreshment of its
media is not enough. Instead the data must periodically be migrated: converted into forms which the hardware and software environments of the day can still process.
Where digital encoding techniques may perhaps have an advantage over other forms of encoding information is in their clear separation of markup and content. As we have seen, the markup of a printed or written text may be expressed using a whole range of conventions and expectations, often not even physically explicit (and therefore not preservable) in it. By contrast, the markup of an electronic text may be carried out using a single semiotic system in which any aspect of its interpretation can be made explicit, and therefore preservable. If, moreover, this markup uses as its metalanguage some scheme which is independent of any particular machine environment (for example international standards such as SGML, XML, or ASN.1), the migration problem is reduced to preservation only of the metalanguage used to describe the markup, rather than of all its possible applications.
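The point can be made concrete: the whole grammar of the illustrative verse encoding sketched earlier can be stated in two DTD declarations, and it is only this small, formally defined object which migration must carry forward:

<!-- a minimal document type definition for the illustrative verse encoding -->
<!ELEMENT lg (l+)>
<!ELEMENT l  (#PCDATA)>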
Far from being peripheral or in opposition to the humanistic endeavour, text encoding and markup are central to it. Text encoding provides us with a single semiotic system for expressing the huge variety of scholarly knowledge now at our disposal, through which, by means of which, and in spite of which, our cultural tradition persists. Text markup is currently the best tool at our disposal for ensuring that the hermeneutic circle continues to turn, that our cultural tradition endures.