The Polyglot Manifesto II

There are, hence, two types of interlocutions required of the historian interested in future-ese: one within the discipline and one outside it. One with colleagues in the social sciences, physical sciences and computer sciences, and the other with the digital public. In both cases, the historian has to learn to speak another language – to be a polyglot – and communicate effectively. In both cases, the act of translation is key. I concede that the necessity of being a translator or a connector may not be apparent to, or deemed beneficial by, many – especially on the latter point, the speaking-with-the-public point. We academics are, after all, pursuing our inner monologues in the bookstacks, and who really wants to invite the barbarians in? Fair enough. I will leave that conversation for some other time, except to say that, as was astutely commented on my previous post, history itself is publicly contested – and not among historians alone [take, for example, the storm over Profs. Dower and Miyagawa’s class Visualizing Cultures].

So, let me focus just on the first type of discussion that we must have – with our colleagues in computer science, or those pursuing digital projects in other disciplines. How do we become connectors? What can we offer the historian as incentive for learning another language? What scholarly benefit could there be, for someone who has sunk 8-10 years into mastering Arabic or Persian or Sanskrit, in learning PHP or XML or Java? This is what we do; that is what they do. Pelikan, again, gives us a wonderful explication:

For it is the repeated experience of those who learn a second language, as it is of those who have always oscillated between their mother tongue and one or more other languages, that the other language sets them free from the confinement of one vocabulary, one semantic system, even one phonetic system, and thus gives them both a freer and a deeper awareness of their own language than they could have had if they had not learned to look at it, as it were, from the outside.

It is exactly the act of speaking with the computer scientist that provides us the scholarly incentive we need. To reimagine our archive digitally is not to déjà vu the print revolution – it is to reimagine text itself. It is our conversation with the computer scientist that will allow us to see the future of the humanities or the discipline. It will give us another knowledge system to construct, without the limitations of print. Yes, I said it: Print is Limited. Let me immediately clarify that I am not saying that print is dead [author is; god is]. We will always have the Book. Ok? Now, can we move on?

Imagine, if you please, a 13th c. Persian text – like the one that I am writing my dissertation on – which exists in 5 manuscripts, one translation done in 1900, and one critical edition done in the 1950s. To get that tenure, it would behoove me to re-assemble the five manuscripts, claim that the translation and the critical edition lack xyz, and produce my own book. But what if I reimagined the text anew? What if I scanned, annotated and tagged all five manuscripts and the translation into a coherent data structure, and presented the text so that the reader could peel, as it were, the layers of the various recensions; read the translation against the manuscripts; follow a thread or theme in and out of various chapters? And coolest of all: what if my reader could annotate and tag and link my medieval Persian text to another medieval Persian text, and another still? What if the texts spoke to one another and threads connected the reader, the text and the historian?
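A minimal sketch of what such a layered text might look like as a data structure, in Python. Everything here – the manuscript sigla, the readings, the thematic tags – is invented for illustration, not drawn from my actual text:

```python
# A hypothetical layered representation of a text: each passage keeps
# every manuscript's reading, the 1900 translation, and thematic tags,
# so a reader can "peel" recensions or follow a theme across chapters.

passages = [
    {
        "id": "1.1",
        "readings": {  # invented sigla for the manuscripts
            "MS-A": "the passage as it appears in manuscript A",
            "MS-B": "a variant reading from manuscript B",
        },
        "translation": "the 1900 translation of this passage",
        "tags": ["kingship", "prophecy"],
    },
    {
        "id": "1.2",
        "readings": {
            "MS-A": "another passage, manuscript A's version",
            "MS-B": "another passage, manuscript B's version",
        },
        "translation": "its translation",
        "tags": ["kingship"],
    },
]

def follow_thread(passages, tag):
    """Trace a theme in and out of chapters: return the ids of
    every passage carrying a given thematic tag."""
    return [p["id"] for p in passages if tag in p["tags"]]

def peel(passages, siglum):
    """Read a single manuscript's recension straight through."""
    return [p["readings"][siglum] for p in passages
            if siglum in p["readings"]]

print(follow_thread(passages, "kingship"))  # both passages treat kingship
```

The same structure extends naturally to reader-supplied tags and to links pointing at passages in some other text – which is all "the texts speaking to one another" needs, mechanically.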

What do I need to make the text I described above? OCR to scan the manuscripts, a database to hold them, XML or SGML to mark up the text with meta-information and structure, a query language to get the text back out of the database, CSS to display the data in a browser, some feeds or output available to the public, built-in tagging and comments – and voilà! That is what my digital text would look like. I am sure, if you sat down to imagine it, yours would look like something else. The point is that, as historians, we cannot use that imagination unless we speak with the programmer and unless we learn a bit of their language. All that is required is to expand our reading a bit. I highly recommend A Companion to Digital Humanities.
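To make the markup and query steps concrete, here is a small sketch using Python’s standard-library XML tools. The element names loosely echo the critical-apparatus conventions of the TEI Guidelines, and the fragment itself – the line, the readings, the witness sigla – is invented:

```python
import xml.etree.ElementTree as ET

# A tiny, invented fragment: one line of text, with a point of variance
# recorded as an apparatus entry holding each manuscript's reading.
doc = """
<text>
  <line n="1">
    <app>
      <rdg wit="A">the reading found in manuscript A</rdg>
      <rdg wit="B">the reading found in manuscript B</rdg>
    </app>
  </line>
</text>
"""

root = ET.fromstring(doc)

def readings_for(root, witness):
    # The "query language" step: pull one manuscript's text back out
    # of the marked-up document by its witness siglum.
    return [rdg.text for rdg in root.iter("rdg")
            if rdg.get("wit") == witness]

print(readings_for(root, "A"))
```

A real project would swap the string for files in a database and the loop for XPath or XQuery, but the division of labor – markup holds the structure, queries retrieve the views – is the same.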

So, how do we get to the future of the humanities? My response is a programmatic one. I would like to see training in the tools of the digital trade available to every graduate student in the humanities. I would like to see articles on digital archives published in Journal of Asian Studies and other flagship publications. I would like to see established historians undertake digital projects employing graduate students. I would like to see divisional efforts to nurture and fund such initiatives. If 10 graduate students in South Asian history reading this went and created a digital archive each…we would have 10 digital archives in South Asian history. That’s how far behind we are [ok, I exaggerate a tiny bit]. The good folks at the Center for History and New Media at GMU are a shining beacon of hope in this regard.

Where do we go from here? How about a reading list for historians on digital humanities? How about a forum where historians could speak/converse with programmers? How about talking to these guys? Who’s with me?

Published by

sepoy


14 thoughts on “The Polyglot Manifesto II”

  1. The key aspect here is to recognize the limits of print and reimagine the text digitally. If we all want to take philology seriously, that’s the only way to go. And for that, as Sepoy says, we got to pick up a couple of new languages too. Persian, Sanskrit, Tamil and Kannada won’t be enough to re-imagine the text.

    Anyway, ‘from pundit to programmer’ we should be able to speak with ease to all, including the Public who should be the beneficiaries of all this expertise. No point in turning all this into a private cultivation of the self and attaining tenure.

  2. no matter how technoleet history or the humanities become, it will never bridge the essential divide between the ivory tower and everyone else. i call this the “who gives a damn” divide. some monograph on shahjahanabad or the kakatiyas can be digitized, hyperlinked, wikied, etc, etc, but it’s not going to make anyone any more interested in the subject. the smoke and mirrors techno act will not take away from the essential problem, i am afraid. hell, the “who gives a damn” divide was supposed to be solved when scholars stopped publishing in latin and greek and started publishing in the dialect also…

  3. I’m sad I missed the Franke conf to get a boy scout tie, but whatever. I’m very interested in what you’re discussing here, and could easily use the efforts of others in whatever my work is now.

    My issue is with seeing the actual compiling, etc., *as my work*. This could be a disciplinary difference. http://eliotswasteland.tripod.com/, for example, strikes me as a limited alpha version of what you’re getting at, but even so, it’s got this feeling of being merely “reference.” Annotation, compilation, etc., strikes me as so unsexy… as so… historical.

    So like I said, maybe it’s a disciplinary difference, in which case I’m trying to wonder how English would react. Or I’m a snob. Or I don’t fully understand. Or some bombination.

  4. …you know, i’m also increasingly entranced by the possibilities for multilayered/hyperlinked digital mapping as a similar tool. i have a vague sort of feeling that people in geography and urban studies departments are a few years ahead of the rest of us in this. for me, the equivalent of your persian text might well be a map of, say, Istanbul in 1900, or Hatay in 1936, in which one could embed demographic data about the location and distribution of various ethnic/linguistic/religious communities, along with images, links to primary sources, and so forth. i have seen fragments of similar things (often as art projects rather than in formal ‘scholarly’ contexts per se) and it whets the appetite.

  5. some monograph on shahjahanabad or the kakatiyas can be digitized, hyperlinked, wikied, etc, etc, but it’s not going to make anyone any more interested in the subject.

    So, where’s your proof of this? How do we know that it wouldn’t make anyone any more interested in it?

    It’s quite unsound to start with the assumption that no one would be more interested. I may be a hopeless optimist, but I believe that the “smoke and mirrors techno act” is more than just an act, and does in fact help bridge your “who gives a damn” divide.

    I place far more blame on ivory tower academics, who stick their noses up at the idea of making their work freely available on the web and engaging with the public through digital methods. Just look at the interest and engagement with content at Wikipedia, the thousands of hits a day at the History News Network, and the thousands of hits a day at CHNM projects. The “history” tag is one of the more popular tags on del.icio.us.

    And I’d say that the “who gives a damn” divide did shrink greatly when scholarly works (and publications more generally) were published in more languages and made more accessible to the public.

  6. this is why we need more people like Sean Pue. allow me to get Urdu-ghazal-geeky and say that, given the tricky nature of the intertextuality of ghazals, it would be a huge boon to future generations of Urdu poetry lovers to have as many ghazals as possible up on the web with tags to indicate their mazmuns.

    most English poems are another kettle of macchliyaan, as each poem as a whole tends to express a particular theme. given that (except in the case of musalsal ghazals), every verse of the ghazal is a semantic unit unto itself, and relates thematically — or, rather, mazmuun-wise — to other shi`rs that might be in other ghazals, in other diwans, by other poets, performing mazmun-based criticism can be an exhausting task. believe me, i’ve tried.

    the only way to do it without running your finger through kulliyat upon kulliyat is to a) have a million verses memorized, as some people still do, or b) have a whole whack of them indexed on a computer. no doubt as notebooks, blogs, and websites continue to usurp the place of the human memory (or maybe i’m just speaking for myself…) plan a will be less and less feasible.

    basically, we need one of these for every ghazal-poet out there.
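    The verse-index the comment imagines could be sketched as an inverted index, here in Python. The poets, ghazal numbers, and mazmun labels are all invented placeholders, not real ghazal data:

```python
from collections import defaultdict

# Each verse (shi`r) is its own semantic unit, tagged with the
# mazmuns it treats. Poet names, ghazal ids, and tags are invented.
verses = [
    {"poet": "Poet-1", "ghazal": 3, "verse": 1,
     "mazmuns": ["wine", "separation"]},
    {"poet": "Poet-2", "ghazal": 7, "verse": 4,
     "mazmuns": ["separation"]},
    {"poet": "Poet-1", "ghazal": 9, "verse": 2,
     "mazmuns": ["candle-and-moth"]},
]

# Build the inverted index: mazmun -> every verse that treats it,
# across ghazals, diwans, and poets -- no kulliyat-thumbing required.
index = defaultdict(list)
for v in verses:
    for m in v["mazmuns"]:
        index[m].append((v["poet"], v["ghazal"], v["verse"]))

print(index["separation"])  # verses by different poets, same mazmun
```

    This is plan b in miniature: the computer holds the million verses so the critic’s memory doesn’t have to.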

  7. Just look at the interest and engagement about content at Wikipedia, the thousands of hits a day at the History News Network, and the thousands of hits at day at CHNM projects. The “history” tag is one of the more popular tags on del.icio.us.

    That is popular history. Inundate those sites with the theses of scholarly works (if you can figure out what the hell said theses are), and see how many non-specialists you can get to care about how vic turner’s theories are being applied to pre-aurangzeb mughal india.

    And I’d say that the “who gives a damn” divide did shrink greatly when scholarly works (and publications more generally) were published in more languages and made more accessible to the public.

    you think scholarly work is published in language accessible to the public? That’s rich. I can understand Seneca better than I can understand HRM Spivak. And I don’t speak Latin.
