Archaic inscribed epigram often allows an object to speak in a bold first person. Noting this, Jesper Svenbro observes that the ancient reader must have had no countervailing “conviction that the first person necessarily implies an inner life and voice” (1993: 42). I have often found it necessary to repeat this observation to my students, who immediately assume that even so simple a statement as “Iphidice dedicated me to Athena” (CEG 198) implies a mind behind the voice. In my twenty-minute paper, however, I pose the question: what if my students are right? My paper breaks this question into two parts. First, to what extent did the ancient reader grant personhood to an inscribed “speaking object,” treating it more or less as one would any other interlocutor? And second, how closely should we connect this grant of personhood to a mind—to an artificial intelligence?
In addressing the first question, I apply the findings of social presence theory. This theory was pioneered in the 1970s to quantify the attenuated presence of people encountered through telecommunication (Short, Williams, and Christie 1976). To what degree, the theory asked, did a person seeing (for example) a newscaster on television consider that person to be present to him or her? And what factors affected that impression of presence? Today the theory is applied to situations where no real person may be involved, such as multiplayer computer gaming, which incorporates both digitally mediated representations of networked players (“avatars”) and computer-simulated characters with no human operator (“bots”). Psychological researchers attempt to determine the extent to which players feel that they are “with another person” in the game (e.g. Biocca, Harms, and Burgoon 2003; Hudson and Cairns 2014). By applying their conclusions to the ancient evidence, my paper finds that, while the ancients would not have answered “yes” to the direct question “is this object a person?”, their behavior when confronted with such objects likely indicated a tacit assent.
Finding an explicit mind behind a grant of personhood is more difficult: all but the most rigorous thinkers simply assume that the two inevitably go together, and even some who probe the issue directly tend to fall back on “rules of thumb.” In fact, both Plato (Phaedrus 275d4–e5) and Alan Turing (1950: 433–434) appear to use the same standard to diagnose the presence of intelligence: does the thing in question apparently respond interactively to questioning by an ordinary interlocutor? Given this agreement, we have reason to believe that the results of modern AI experiments may have trans-historical implications. By examining a few “near-misses” of Turing’s standard, my paper demonstrates that, even in the past fifty years, the popular understanding of intelligence has evolved substantially, as people tend to attribute intelligence to any device that exceeds their prior experience with non-human devices. In short, people do not know what intelligence is, but only what it isn’t: it isn’t the things they’ve seen before. This finding allows us to understand why Plato’s “near-miss”—a painting or a written text—is beneath consideration today: we know devices that can do better. But the ancients, who started from a lower bar, probably would have conceded intelligence to works of art, or to texts, if those things represented a substantial departure from past works.
My paper then briefly applies these findings, generating a rough rubric from the results of modern research and using it to evaluate a few inscribed epigrams. We will see that their use of the first person is not simply conventional, but part of a strategy to create an impression of personhood, and perhaps even of mind. Both we and the ancients, then, have desired to connect to a “thinking machine” (Turing 1950: 434)—and both of us have thought that we were on the cusp of inventing one.
Linguistic Strategies and the Hermeneutics of Reading