American Council of Learned Societies
Occasional Paper No. 41


Computing and the Humanities:
Summary of a Roundtable Meeting


II. TOWARD A COMMON LANGUAGE:
METHODS AND CONTEXT


Visions of the Future

Computer scientists and humanists often start with differing views of how computing should be developed for humanities applications. This contrast can be illustrated at the desktop, where most humanists work. Noting the fragmentation of information resources across physical and electronic sources, Jerome Saltzer proposed a goal

. . . of a digital library world, where every document in the library is something that I can inspect from my desktop. I am not looking for something elaborate. All I want is the ability to look at documents. The other goal for that future system that I think is within reach is that whenever you spot something in a document that makes a reference to something else, it ought to be possible to click on it and have that appear in the next window.

Joseph Busch argued that not all information may be appropriate for desktop delivery.

I am a great believer that this is not something that everybody should do on their desktop. Not everybody should be an information server or is a good information server. I think we need to have service providers: not necessarily commercial service providers, but we have to understand the need for an institutional role of being a service provider.

What services are provided by whom and how services are provided can have a fundamental impact on both system design and institutional roles.

Effective system design can be an elusive goal. Michael Brodie pointed to systems failures in banking, telecommunications, and other commercial sectors as evidence of the difficulty of moving from a theoretical vision to workable systems design and implementation. A first step, he argued, is to understand the "humanities process," analogous to various business processes.

What is the method by which one establishes a premise and comes to a conclusion by interacting with other people? You study that process and translate it into the appropriate information technology requirements and build it. If you build a vague thing to support a general notion, it is very unlikely that you will succeed unless by some wonderful happenstance. . . . Run rampant in creating the vision but do justice to the resources that you request in order to achieve it.

Collaboration, Brodie suggested, should extend to development of the ontologies that drive automation processes.

Much discussion of information technology focuses on information elements. But many also emphasize a vision of the future that more fully engages users as social animals, as individuals with distinct personalities. Computing and communications technologies have different merits, and considering them as a set brings with it the risk of missing important differences in their value to people. Mary Shaw suggested that contrasting the newer Web with older newsgroups can help to understand some of those differences.

The World Wide Web is one of the most sterile ways for people to interact with each other because the central human interaction has been washed out. As originally created, it was a way of pointing at some document out in cyberspace and saying, "I wish to see that document now." It has been larded about with mechanisms that let the documents animate themselves, but there is still not very much that lets you interact with real people. A much older use of the Internet involves electronic mail and newsgroups, which allow communities to develop around a shared interest in a shared topic.

An important piece of technology differentiates the newsgroups from the World Wide Web and makes the information genuinely accessible to people in forms that they understand. Historically we have concentrated on stringing wires and making bits go faster. We had to do that to get started. But we now need to devote energy to making the information capabilities accessible to real people in the terms that they can understand and control rather than just making things go faster with more connectivity, more people, more kinds of representations. Technologists need to rebalance the investment in the underlying network and the investment in packaging the capability and the form that reaches the intended audience.

The idea of active use and involvement with a medium as the key to its potential was amplified by Willard McCarty and Richard Liebhaber. Liebhaber argued that such potential implies a radical change in what people do with evolving technology.

Contextually, we are still focused on the written and statistical, whereas the movement is toward visual and virtual. We are working methodologically in a world of "store and forward," and we are moving toward a world of "forward and store." Our methodologies and our view of the technology in question are based on storing material somewhere and forwarding it to a user or a researcher. I believe technology and price, performance, and some of the other issues we discussed are leading us to a world of "forward and store." That is, instead of making it narrow and real-time, I am going to make it fatter and less than real-time, put it into a device and deal with it visually and virtually. I will not be connected to anything that is physical. What troubles me even more is that I believe the research and the real contextual thought that is going on to make those two changes happen is not being done in an academic environment, but in the commercial and industrial environment.

Liebhaber explained that progress in computing technology performance and associated reductions in cost will allow functionality to move from work space into play space and, ultimately, into households. Trends also support a repositorial view of information: together with the economics supporting greater connectivity, this may broaden the potential for human studies and human interaction. Intellectual property becomes more important as costly transport and storage, and the easy control of ideas and materials they once afforded, fall away.

Imagine a world in which each of us had the opportunity to create for ourselves our own special interest magazines, which change by time of year or phase of life or whims of interest. Imagine a world in which instead of reading about the world of Thomas Aquinas, we download a series of images of Thomas Aquinas.

Liebhaber urged humanists to plan for novel system capabilities and designs, referring to the shift he described from "store and forward" to "forward and store." Part of this trend involves recognizing and planning for growth in the consuming public, for whom automation extends do-it-yourself options to the finding, publishing, and repositorial handling of humanities material.

The coupling of technology and process is mediated by economics: people choose what they can afford. Responding to Edward Ayers' discussion of the potential of electronic media to support "democratic" access to humanities content, Charles Henry cautioned that humanists may have to work against the rhetoric of the National Information Infrastructure,2 which tends to be commercial in its metaphors—pipelines, conduits, highways, and toll booths as "things that you can direct and meter." Humanists find little comfort in computer scientists' observations about emerging technology for fine-tuning charging schemes. Their experiences will, in any event, be shaped by the larger set of non-technical factors that govern intellectual property rights and associated public policy.

Jerome Saltzer observed that computer scientists themselves may not understand all of the potential of information technology.

Things that cost $100,000 in 1983 we can do for $100 today. . . . Most computer scientists are still reacting to this factor of a thousand that has come upon them. . . . They are still trying to figure out how to unwind all of the decisions that were made ten years ago based on things that cost 100 times as much, and get into step with today's costs. They are not looking ahead to see that the things that cost $100 today are soon going to cost $1.00. So when humanists talk to computer scientists, you have to realize that most computer scientists are, in a funny way, tied up in the past.
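
The arithmetic behind Saltzer's point can be made explicit. Assuming that "today" in the quotation refers to roughly the mid-1990s, about 14 years after 1983 (an assumption not stated in the text), the figures imply a price-performance halving time on the order of a year and a half:

```latex
% Rough, illustrative arithmetic only; the 14-year span is an assumption.
\[
  \frac{\$100{,}000}{\$100} = 10^{3}
  \qquad\Longrightarrow\qquad
  t_{1/2} \approx \frac{14\ \text{yr} \times \ln 2}{\ln 10^{3}} \approx 1.4\ \text{yr}.
\]
\[
  \text{At the same rate, } \$100 \to \$1 \text{ in roughly }
  \frac{\ln 10^{2}}{\ln 2} \times 1.4\ \text{yr} \approx 9\ \text{yr}.
\]
```

On that trajectory, designs tuned to decade-old costs are already two or three orders of magnitude out of date, which is the sense in which Saltzer describes computer scientists as "tied up in the past."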

Even computer scientists, then, have difficulty coping with the rapid change in computing technology.

Universes of Discourse

Overcoming language differences was a major theme of the roundtable: all agreed that it is an issue for any cross-disciplinary collaboration. Pragmatically, Edward Ayers pointed to the rise of a pidgin language that spans computing and other disciplines. Michael Joyce captured the difficulty eloquently in an opening statement that used unusual words to illustrate the challenge of building a common language. (See Appendix D.)

Willard McCarty observed that the homogenizing influence of information technology on methodology provides a basis for a common language.

What jumps immediately into focus after five years of teaching humanities computing to graduate students at Toronto, and now undergraduates and postgraduates in London, is the importance of methodologies. When you teach humanities computing what immediately becomes obvious is that the only subject you have to talk about is the methodology. You find that this is a great deal to talk about. There is a huge and very rich interdisciplinary common ground of techniques relating to the data that people in the humanities deal with, whatever their particular disciplinary orientation. There are common tools and techniques. One can easily take, for example, relational database management and highlight examples in various disciplines and have people recognize the utility of these things. But much more important than that is the new perspective on these disciplines that these tools bring about. Here I have the very strong suspicion that this transcends the humanities and has to do with what has been going on in the sciences. We do not know very much about this yet. . . . We can get to this common language by seeing what "falls out" from the application of computing to the humanities during the fifty years since Roberto Busa started his project and in the sciences.
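
A concrete, if deliberately simple, illustration of the "common tools and techniques" McCarty mentions: the sketch below is not drawn from the roundtable, and its schema, rows, and names are hypothetical. It uses Python's standard sqlite3 module to show how a single relational structure and a single join query can serve scholars in quite different disciplines.

```python
# Minimal, purely illustrative sketch of relational database management
# applied to humanities data; tables, fields, and rows are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# One table of primary sources and one of scholarly annotations.
cur.execute("""CREATE TABLE source (
    id INTEGER PRIMARY KEY,
    title TEXT NOT NULL,
    discipline TEXT NOT NULL,
    year INTEGER
)""")
cur.execute("""CREATE TABLE annotation (
    id INTEGER PRIMARY KEY,
    source_id INTEGER NOT NULL REFERENCES source(id),
    scholar TEXT NOT NULL,
    note TEXT NOT NULL
)""")

cur.executemany(
    "INSERT INTO source (title, discipline, year) VALUES (?, ?, ?)",
    [("A medieval Latin sermon", "philology", 1275),
     ("A county newspaper run", "history", 1861),
     ("An engraving series", "art history", 1750)],
)
cur.executemany(
    "INSERT INTO annotation (source_id, scholar, note) VALUES (?, ?, ?)",
    [(1, "A. Scholar", "variant spelling of a key term"),
     (2, "B. Scholar", "editorial stance shifts over the year"),
     (3, "C. Scholar", "plate reused from an earlier series")],
)

# The join is the shared technique; only the content differs by discipline.
query = """
    SELECT s.discipline, s.title, a.scholar, a.note
    FROM source AS s JOIN annotation AS a ON a.source_id = s.id
    ORDER BY s.discipline
"""
for discipline, title, scholar, note in cur.execute(query):
    print(f"{discipline:12} | {title:28} | {scholar}: {note}")

conn.close()
```

The point is McCarty's: the technique is held in common, while the materials and the questions asked of them differ from discipline to discipline.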

Methodology can provide one of the spaces where humanists and computer scientists meet. Summarizing his description of a variety of "potentiated spaces" created by humanists using computers, Joyce suggested that humanist and computer science methodologies have a reciprocal relationship: "our presence as human persons in real places continues as a value not despite but because of the ubiquity of virtual spaces."

Collaboration: "Synching Up"

Effective collaborations may be serendipitous, but they are often not accidental. A critical element of any collaboration is mutual respect, and that element is especially important in collaboration across disciplines. Individual disciplines tend to be chauvinistic, and disciplines that generate useful tools—including computer science—are concerned to have their contributions appreciated as such. Willard McCarty addressed that issue:

"Computing in the humanities" is not quite right, because the humanities do not own computing and should not swallow it up. We have to think much more in terms of a common language. The phrase "common languages," as George Steiner recognized, refers to the state of bliss before battle when everyone speaks one tongue. But we have suffered this fall into disciplinary specialization, and we have these separate city-states. . . . Perhaps a better analogy is the perspective of the Phoenician merchants who invented the alphabet—of people moving between civilizations needing to invent a meta-device, a meta-language, if you will, to represent the goods and services that they were trading to people speaking incompatible languages. . . . "Computing in the humanities" is not right because it suggests that computing moves into the disciplines and becomes absorbed in them. So you have a kind of Marxist theory of the withering away of the state and finally you have all of these disciplines with each of their computing experts in them. This is not the way to do things, because it ignores the fundamental contribution of the computer to the interdisciplinary dialogue, the fact that there is all of this material held in common in these techniques and approaches.

Although the roundtable was labeled "computing and the humanities" in recognition of the concerns McCarty raised, he cautioned against inferring from that wording that computing and the humanities are fundamentally separate. That inference, he explained, is "an illusion caused by a lack of historical perspective and perpetuated in the discipline-based structure of our institutions."

Stanley N. Katz underscored the challenge of transcending discipline-based structures, noting the mix of strength and constraint that they imply:

Probably everyone around this table would raise the banner of interdisciplinarity or of multidisciplinarity—or I like to talk about it as "nondisciplinarity." But probably none of us, if we are thoughtful, wants to give up disciplinarity. Method is important. It is enormously important. We do not know how to maintain a structure in which we have the virtues of both method and nondisciplinarity. It seems to me to be an enormously important challenge and very relevant to the kind of collaboration that we are talking about here.

McCarty proposed looking more broadly at "humane learning, which includes the sciences." He suggested that computing from a humanistic point of view addresses how people think.

The computer, from a humanist's perspective, and I think from several others, is essentially a modeling device. That is, through it we determine or we play with how scholars think. And the interesting question that arises is the discrepancy between how the computer manages to do things and how human beings manage to do things. This generates many questions of interest to humanists because it has intimately to do with how the research perspective changes once you begin computing your texts, images, sounds, and other forms of humanities data.

McCarty noted that scholars who shaped computing—for example, Alan Turing and Vannevar Bush—used models from the cognitive science branches of philosophy, neuropsychiatry, and other fields.

Many pioneering applications in the humanities have been documented, and several are accessible on the World Wide Web. (See Appendix E.) To illustrate emerging possibilities, some of which relate directly to the challenge of understanding and exploiting how people think, Mary Shaw presented examples of cross-disciplinary collaboration among computing, other sciences, and the arts. The Journey into the Living Cell project of the Carnegie Mellon University Studio for Creative Inquiry, which draws on the Fine Arts Department and the Carnegie Science Center, was motivated by artists, who saw an opportunity to reach a new audience with a new medium. An interactive, visual system developed by biologists and artists, it involves the audience as collaborators in a performance. Another interactive performance project, associated with CMU's digital library project, features a representation of Albert Einstein whose script is performed in different orders depending on the audience; its elements include speech recognition and synthesis, information retrieval, and digital video. Shaw's own work includes an educational application that combines simulations, record keeping, and documents; she is collaborating with cognitive psychologists to understand how learning takes place.

Shaw derived from her and others' experiences some principles for effective collaboration that were echoed throughout the roundtable discussions. Foremost is for people to have some common cause, such as working on a particular project. This can generate positive feedback (drawing in students, colleagues, and follow-on activities). Participants must be full partners in order to reconcile different points of view along with different research styles, languages, and cultures. And institutions must find ways to remove administrative barriers (allocation of contract overhead, turf battles over income) in order to create the conditions for collaborations and for opportunities to be seized spontaneously when they arise. Edward Fox reinforced that point, noting that collaboration "cannot be marginalized," with humanists left outside of a science project. He suggested that this had been an issue with the federal Digital Library Initiative (DLI).3 Shaw commented more broadly that, from the top down, institutions can only enhance or interfere.

Thomas DeFanti echoed Shaw by describing how partnerships between computer scientists and artists created the subdiscipline of computer graphics. He noted that SIGGRAPH conferences are distinguished from other computer meetings by the participation of artists, illustrators, and other visual workers, male and female.4 Part of this mixture reflects the aesthetic imperative: "early on during the growth period it was clear that if we were going to promote this technology it had to look good." DeFanti's current lab engages artists as project managers because they surpass scientists and engineers in driving the software to commercialization. These artists have scientific training; they are "a very cross-cultural bunch of people who are actually generating the technology."

Sandria Freitag suggested that the seemingly trivial frustrations about converting to different software may express a more fundamental concern about the match of technology to process: "Many people were right: WordPerfect is a much better package to work with than Word if you are writing." Humanists often perceive limited ability to shape the technology they use. Concern grows "from that very simple level to the much more complicated level of how do you harness the technology that can deal with the visual and the virtual." She remarked on the tensions revealed in the roundtable discussions—"between the notion that there is a technological expertise out there that can be leveraged on behalf of the humanities; and the other side of the issue, which is that the communities themselves have to structure these things, have to create them to address their own needs."

Freitag and others noted the emergence of a new mindset: a transformation of what humanists do, and how they do it, that is more subtle and profound than what results from merely incremental improvements to the usability of any particular system. Michael Neuman cautioned against generalizing that phenomenon, at least in the near future. Attitudes about technology vary across the humanities. It may not be simply a matter of time before all humanists embrace computing tools and methodologies; it may not be possible to reach the entire community with any single project. But Willard McCarty suggested that broad forces shaping education impel accelerated attention to computing in the humanities.

Students come to us with the computer in their mental vocabulary as a kind of model for how thinking takes place. It is extremely important that humanists deal with computing in order to carry out their mission. We have no choice about this whatsoever. We also need to communicate with the public since we are no longer swathed in superstitious reverence for what we do, as was true in my parents' generation.

Humanists, explained Edward Ayers, have been trained in "how to deal with one kind of technology, the book," although it has taken centuries to master that technology. The more diverse and heterogeneous world of electronic information poses major challenges for instructors at all levels. The strong links between different kinds of humanities activities and education motivate interest in information technology as an amplifying agent, as well as interest in options for disseminating information about projects and systems with educational value.

Bruce Schatz noted that even in scientific contexts, professors and graduate students show different preferences for software and systems. Drawing from his experience with genome analysis systems, Schatz asked:

Would you like to have something that is used by professionals in the field now? You need to find something that they really want, that is so important to their lives that they will spend time using the systems that do not actually work. Or do you want something that is used by graduate students in scientific areas or undergraduates in teaching areas that does not really do anything useful but illustrates to them what the world will be like when they are professionals? Most of the high-end research systems are very, very popular among the graduate students because they know that that is what the world will be like five or ten years from now. But professors and professionals tend not to care because they can see that their immediate problem can be done more easily by some simpler mechanism. So you have to think about what kind of project you want to have before you start; that determines almost all of the other choices.

Technology holds the promise of alleviating some of the problems of change that it creates. For example, Edward Ayers described how his center uses a model in which "an individual project has to be the carrier of its own set of instructions, somewhat like DNA; it has to be able to tell people how to use it at the same time." More generally, Tom DeFanti noted the impetus from distance learning:

If distance learning is going to happen, it will need everything that you are talking about in order for it to be successful. I think that it is going to happen because the technology will be there. How long it takes, I have no idea. How long it will be stalled, I have no idea.

