This article was originally published in Amphora 12.1. It has been edited slightly to adhere to current SCS blog conventions. All links are active; however, some information, such as pricing, may have changed.
As the tools and methods for creating 3D models of sites and objects become less expensive, archaeologists are increasingly putting them to good use in the field. This article focuses on my collaborative work to scan objects found at the site of Kenchreai in Greece and now stored nearby in the Isthmia Museum. It covers practical issues, and one goal of writing this piece is to encourage others to explore the creation of 3D content. Accordingly, I stress that 3D tools are becoming easier to use, not just less expensive. It will be equally important to think about what to do with these models after they are made. Permanent access to 3D models is a goal, and initial steps towards it are described below. Likewise, rich linking of information about scanned objects to descriptions of their original archaeological findspot will further encourage contextualized studies of Greek and Roman material culture.
As 3D content becomes available on the internet, new approaches to both teaching and research will be enabled. This is particularly the case as virtual technologies move into consumer products, a development clearly seen in news coverage of relatively inexpensive virtual-reality headsets such as Microsoft’s HoloLens and Facebook’s Oculus system. If immersive experiences are coming, classicists can prepare by creating materials that represent the cultures we study. Within this broad context, an underlying theme of the following discussion is that all members of the SCS community can choose to engage with the opportunities that three-dimensional renderings of the ancient Mediterranean world offer.
In recent years, a workflow that involves taking many photographs and processing them into a 3D model of a real-world object or scene has gained in both mind-share and actual results. This approach uses the overlap between photos in a set to calculate the position and shape of objects. That overlap can be discovered automatically and the resulting model has a realistic appearance and can serve as a useful surrogate for the original. Many practical examples and good discussions of photo-based modeling appear in the recent volume Visions of Substance: 3D Imaging in Mediterranean Archaeology, edited by W. Caraher and B. Olson and freely available in PDF form. The work I describe here builds on themes developed in my contribution to that volume.
A major advantage of the photo-based approach is cost. Many archaeologists working in the Mediterranean and elsewhere use Agisoft Photoscan, which is available at an educational price of $59.00, though other solutions exist and there are more expensive versions of Photoscan as well.
The major disadvantage of using photographs is the time it takes. Taking the photographs can be an involved process, and processing those photographs can take days at worst, hours at best. And while the workflow is very automated in parts, with no intervention needed for software to calculate the relationship between photographs, the selection of which specific photographs to use is often an iterative process. Particularly when it comes to modeling objects and small features, which is my area of focus, the first run will indicate which parts of an object came out well and also highlight photographs that are interfering with the calculation of good geometry. Bright lights in photographs often need to be masked so that they are ignored; softly focused photographs need to be excluded. After such adjustments, one re-runs the process, perhaps not from the start, but again, hours can pass by with only slow progress towards the end result.
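Some of that iteration can be avoided by screening out soft-focus photographs before processing begins. The sketch below is not part of any Photoscan workflow; it is my own illustration in plain NumPy, with hypothetical function names and an arbitrary threshold, of a standard sharpness measure: the variance of an image's Laplacian, which drops sharply for blurry images.

```python
import numpy as np

def laplacian_variance(gray):
    """Variance of the Laplacian: a simple sharpness score.
    Low values suggest a softly focused image."""
    # 3x3 Laplacian computed via shifted-array differences (no extra deps)
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def flag_soft_photos(images, threshold):
    """Return the indices of grayscale images whose sharpness score
    falls below the given threshold (threshold chosen by experiment)."""
    return [i for i, img in enumerate(images)
            if laplacian_variance(img.astype(float)) < threshold]
```

Running the flagged photographs past a human eye before excluding them would still be wise; the score only ranks images, and a sensible threshold depends on the camera and subject.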
This season at Kenchreai I explored the current leading edge of low-cost hardware that is beginning to bridge the gap between expensive devices that work quickly and the slower photographic process mentioned above. What follows is timely in that developments in this field are coming rapidly. I was in Greece in late May and early June of 2015 using an iPad-attached Structure Scanner made by Occipital Corp., a device that began as a Kickstarter project. The scanner itself began shipping in March, 2015, with a list price of $499, including relevant software. It works with recent-model iPads that a field project or individual might already own; I used it with a 64 gigabyte iPad mini 3. Both the scanner and the iPad were purchased with faculty research funds provided by the Institute for the Study of the Ancient World at New York University.
Research at the archaeological site of Kenchreai for over half a century has produced a wealth of artifacts and architecture that will benefit greatly from documentation in 3D. At the site itself, the majority of the known architectural remains are Roman in date and were in use following the likely Augustan construction of massive artificial breakwaters that extend from shore. These turned a small curved beach into a well sheltered harbor with an excellent anchorage and wharfs. Intensive, systematic excavation began at Kenchreai in 1963. That early phase of the project is well known for the discovery of fourth-century opus sectile glass panels depicting, among other figures and scenes, the poet Homer and the philosopher Plato.
Current research at Kenchreai is conducted with the permission of the Greek Ministry of Culture and Tourism under the auspices of the American School of Classical Studies at Athens. Professor Joseph L. Rife of Vanderbilt University is the Director of the American Excavations at Kenchreai and Professor Jorge J. Bravo III of the University of Maryland, College Park, is the Co-Director. The directors and I are extremely grateful to the Corinthian Ephoreia and to the staff of the Isthmia Museum for their ongoing support of our research and field work.
Because this is a practical article, direct comparison of two models of the same piece—a Roman-period marble statue-base preserving two human feet to just above the ankles (Inventory number Ke 1221)—is useful. Figure 1 below shows the model made with Photoscan and its caption includes a link to a web-based version that readers can rotate using most modern browsers.
Figure 2 shows the model made with the iPad-attached Structure scanner.
Both of these figures were exported from the open-source 3D viewer and editor Meshlab, which is an essential part of any 3D practitioner’s software toolkit.
Readers are very much encouraged to “click through” and inspect the models themselves, but even the figures here show some of the advantages and deficiencies of each technique. The texture of the marble surface appears with much more detail in the photo-based model, and the same is true of the details of carving. For example, the fine delineation of the toenails is somewhat lost in the model made with the Structure scanner. Beyond these original details, note that the crack in the back foot is clearly visible in Figure 1 but in very soft focus in Figure 2. Views of each model from approximately the same perspective appear in Figure 3 and highlight the advantages of the photo-based method.
The second model does have its strengths, however. First, it is very “clean.” The Structure sensor directly measures distance and allows a virtual box to be defined; anything outside that box is ignored. The photo-based approach does sometimes have problems detecting edges, particularly in workspaces with strong lights and reflective surfaces that create bright spots, which seem always to appear in the background when one photographs even medium-sized objects. Similarly, objects that are themselves too glossy or transparent, such as polished metal or glass, can resist good outcomes.
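The virtual-box cropping that the sensor performs is, conceptually, a simple filter, and the same idea can be applied after the fact to any point cloud. A minimal sketch, in NumPy with a hypothetical function name, of keeping only the points inside an axis-aligned box:

```python
import numpy as np

def crop_to_box(points, box_min, box_max):
    """Keep only the points inside an axis-aligned bounding box.

    points: (N, 3) array of x, y, z coordinates
    box_min, box_max: length-3 arrays giving the box corners
    """
    inside = np.all((points >= box_min) & (points <= box_max), axis=1)
    return points[inside]
```

For example, if the artifact sits within a one-meter cube near the origin, `crop_to_box(cloud, [0, 0, 0], [1, 1, 1])` discards background clutter such as tables and walls in a single step.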
A further advantage of the Structure scanner is that its resulting models embed information about the real-world size of the objects represented. Many software applications, including Meshlab, are able to measure dimensions. Photo-based models cannot be referenced to real-world units unless a scale is included and the model is processed to take account of that information, a technique whose full explanation lies outside the scope of this brief discussion.
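While a full explanation is indeed beyond this discussion, the core of the scaling step is straightforward: a photo-based model is correct up to a uniform scale factor, so multiplying every vertex by the ratio of a known real-world distance (say, between two marks on an included scale bar) to the corresponding distance in the model brings the whole model into real units. A sketch, with hypothetical names, of that one step:

```python
import numpy as np

def scale_to_real_units(vertices, idx_a, idx_b, real_distance):
    """Rescale model vertices so the distance between two reference
    vertices (given by index) equals a measured real-world distance.
    Photo-based models are internally consistent, so one uniform
    factor corrects every dimension at once."""
    model_distance = np.linalg.norm(vertices[idx_a] - vertices[idx_b])
    return vertices * (real_distance / model_distance)
```

In practice, software such as Photoscan handles this through its own scale-bar tools, and using several reference distances guards against measurement error in any single one.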
But the most compelling advantage of the Structure scanner is speed. The photo-based model was made with photographs I took in 2013. I shot 184 images in total and, again after some iteration, used 60 in making the model shown above. The work began with taking the photographs in the Isthmia Museum and then entailed processing them when back home. As a result, many days elapsed before I saw the final result. This one example was to some extent a worst-case scenario; sustained processing can produce a high-quality model by the morning after photographs are taken, particularly if staff are assigned to keep up the pace. Regardless, results are in no way instantaneous.
By way of comparison, let me set the scene for the making of the model in Figure 2. Like many archaeological projects, Kenchreai welcomes visiting colleagues. In 2015, Professor David Petrain of Hunter College, CUNY, and a specialist in Hellenistic poetry and Roman visual culture, joined the project for a short stay. He had never done any 3D modeling prior to joining me for a day of work with the Kenchreai artifacts. After a brief introduction, David scanned a few amphoras, and we then decided we would try a model of “the feet,” as we’ve come to call Ke 1221. I wanted to compare results with my previous efforts and to put myself in a position to share that comparison. All of this is to say that after a morning of practice and actual work, David was well prepared to try a slightly more ambitious scan. Not including the time spent moving the base so that it was in indirect natural light, the actual scan took under three minutes. We saw preliminary results on the iPad screen so we knew that he had succeeded. I did a scan myself as backup but what you see here is his version.
A powerful feature of the Structure scanner and iPad combination is the ability to see progress while scanning an object. Figure 4 shows an example of what the screen of our iPad displays while a scan is underway. In this image, Blaise Gratton, a recent graduate of Vanderbilt’s MA Program in Classics, is controlling the process. The specific software he is using is the Scandy iOS app, which is available for free from Apple’s App Store. The object in this case is a mid-first century CE Italian sigillata platter (Ke 518, a Conspectus form 18.2). Scandy is indicating success by showing the surface of the platter in gray. The red section is too close to the scanner, but stepping back would capture that as well. The process quickly becomes routine. Moving the iPad around the object causes the gray area to expand, and it is possible to move back to fill in detail, and also possible to move the iPad up and down to do the same. Tapping “Done” ends the process. The scanner does come with its own software, titled Skanect, but I found that to be less useful in that it is more complicated to set up and does not give such direct feedback.
Scandy is not, however, a perfect tool, and, to be fair, it does not present itself as intended for archaeological work. Its icon is a wrapped hard candy. When a scan is complete, the iPad displays the message “Applying Magic.” It seems that the magic is actually the processing of the raw data collected by the Structure scanner to reduce the resulting model to a more manageable size; this is unfortunate, since detail is discarded along with file size. Additionally, Scandy’s default mode of accessing models requires first uploading to its website and then downloading to a computer. This was often impractical given the realities of Internet access, so I used the OS X app iMazing to move models from the iPad. There is already considerable utility here, but also an opportunity for a developer to write an iOS app intended for high-quality scanning of many forms of cultural heritage. I have been in contact with Scandy’s developers and used a beta version that improved while I was in Greece. Such communication is important: archaeologists will not get the tools we need if we don’t make our requirements clear.
I have tried to communicate some of the practicalities of using the combination of the iPad-attached Structure scanner and the Scandy iOS app to make models of objects excavated over 50 years ago at Kenchreai. One reason to try this setup is to stay aware of ongoing developments in the capture of 3D data. That is a short-term goal, and I am optimistic that both hardware and software will get better reasonably quickly. Our long-term strategic goal is to make data available on the public internet. We have begun to upload a few models into the Kenchreai Archaeological Archive (KAA), and both of the models discussed here are in that resource. See the page for inventoried object KE 1221. That web page in turn links to the original excavation notebook that records the excavation of this marble base. The notebook pages report that the base was found in a submerged structure on the south mole at Kenchreai. Not long after the base came to light, the opus sectile glass panels for which Kenchreai is most famous would be found in the same room. Linking 3D models to archaeological context in ways that allow a more complete understanding of Kenchreai’s past by anyone who explores these resources is the chief reason to spend time scanning artifacts. As I indicated just above, I am optimistic, and I look forward to compelling results as my colleagues and I pursue new approaches at Kenchreai.