The Homer Multitext Project (HMT), Casey Dué and Mary Ebbott, editors, is a project of the Center for Hellenic Studies of Harvard University. This presentation will reflect on the project’s 16-year history with particular attention to the interrelationships among its philological aims, its social values, and evolving technological realities. Gregory Nagy coined the term “multitext” as early as 1994, and the HMT was founded on a desire to produce an “Edition” of Homeric Epic that might more completely capture the complexity of its tradition and illuminate its oral poetic origins. It was a “born digital” project explicitly aiming to capture and publish data, and relationships among data, that would be impossible in print.

The HMT was early to embrace the possibilities of editing from digital images of manuscripts, and was a leader in the ethos of publishing manuscript images under open licenses: it has made available under Creative Commons licenses images of three Iliadic MSS from the Biblioteca Marciana and two from the Real Monasterio de San Lorenzo de El Escorial, including (for some MSS) three-dimensional mesh files of each folio and multispectral imaging.

The complexity of the textual contents of a “Homer Multitext” was apparent from the start, leading to the first work on the Canonical Text Services Protocol, a new definition of “text” for a digital age, and a concise but semantically rich method of machine-actionable citation of text, the CTS-URN, which is now widely used in digital Classics and other fields. The HMT’s approaches to integrating textual data with images and other kinds of data on occasion anticipated recent developments such as IIIF and OpenAnnotation, on many occasions profited from the work of other projects, and on some occasions proved to be badly mistaken.
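
To give a sense of the citation scheme, the sketch below decomposes a CTS-URN into its components. The sample URN (citing Iliad 1.1 in a Venetus A edition) and the small Python helper are illustrative assumptions for this presentation, not the HMT’s own tooling.

    # Minimal sketch of how a CTS-URN decomposes into citation components.
    # The URN below and this parser are illustrative only, not HMT software.
    def parse_cts_urn(urn: str) -> dict:
        """Split a CTS-URN into its colon-delimited components."""
        parts = urn.split(":")
        if len(parts) < 4 or parts[:2] != ["urn", "cts"]:
            raise ValueError(f"not a CTS-URN: {urn}")
        return {
            "namespace": parts[2],   # naming authority, e.g. 'greekLit'
            "work": parts[3],        # textgroup.work[.version[.exemplar]]
            "passage": parts[4] if len(parts) > 4 else None,  # e.g. '1.1'
        }

    print(parse_cts_urn("urn:cts:greekLit:tlg0012.tlg001.msA:1.1"))
    # {'namespace': 'greekLit', 'work': 'tlg0012.tlg001.msA', 'passage': '1.1'}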

The most important aspect of the project has been its absolute reliance on collaboration, initially among its editors, the Directors of the CHS, and its Project Architects, other professional classicists, and computer scientists. Starting in 2006, significant scholarly work on the project was placed in the hands of undergraduates; today the majority of it is. Over this decade, as (by now) hundreds of editors have contributed transcriptions, analyses, and commentary, the project has continued to explore the best technologies for fostering this collaboration and ensuring the rigor of the scholarship and the integrity of the data. These technologies began with strict workflows using notebooks of paper and colored pencils. Today, all HMT editors run on their personal computers a standardized Linux Virtual Machine providing pre-configured version control software, editing tools (some standard FOSS tools, some written for the HMT), and a suite of validation software that ensures the integrity of the data and the validity of the Greek.
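
As a hedged illustration of the kind of check such a validation suite might run (a sketch under assumptions, not the HMT’s actual software), the snippet below verifies that a transcribed line contains only polytonic Greek characters and common punctuation.

    # Illustrative sketch only: one character-level check a validation suite
    # for Greek transcriptions might perform. The ranges cover the Greek and
    # Coptic and Greek Extended Unicode blocks; the punctuation set is a guess.
    GREEK_RANGES = [(0x0370, 0x03FF), (0x1F00, 0x1FFF)]
    ALLOWED_EXTRA = set(" .,\u00B7;\u2019")  # space, stops, ano teleia, apostrophe

    def is_valid_greek(line: str) -> bool:
        """Return True if every character is Greek or allowed punctuation."""
        return all(
            ch in ALLOWED_EXTRA or any(lo <= ord(ch) <= hi for lo, hi in GREEK_RANGES)
            for ch in line
        )

    print(is_valid_greek("Μῆνιν ἄειδε θεὰ"))   # True: polytonic Greek only
    print(is_valid_greek("Menin aeide thea"))  # False: Latin characters present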

The presentation will give attention to the (almost entirely positive) professional consequences, for the HMT’s principals, of long-term engagement with a project like this, and will ask (not as a rhetorical question) whether this project is fortuitous and unique, or whether it is reproducible, and under what disciplinary circumstances. It will end with a brief statement of the current state of HMT data and plans for its future.