Key Trends in Digital Humanities by Visiting Professor Alan Liu

I am very fortunate to be at the University of Canterbury in New Zealand while Professor Alan Liu is here for six weeks as a Visiting Fulbright Specialist (he is a cofounder of 4Humanities and teaches at the University of California, Santa Barbara). He has given two talks already, impressing me and other postgraduate students with his accessible explanations and compelling ideas about the place of the Digital Humanities in today’s and tomorrow’s world. One of my friends now “gets” why I am so interested in DH, thanks to Liu.

His first talk on October 28, 2015, was entitled “Key Trends in Digital Humanities: How the Digital Humanities Challenge the Idea of the Humanities”.

He said the gold standard for humanists has been hermeneutics, the examination of meaning-making. He then outlined five key trends in DH that we need to consider:

1 — “Data”: Quantified, Big, Structured

The new question at conferences is “Where’s your data?” But humanists are not used to thinking about their work as data, or to considering how to analyze it in meaningfully interpretable ways. Compared to the sciences, our data is messy. Part of this is because we are dealing with heritage collections from the past, which are hard to structure and made up of bits and pieces. In “Scraping the Social? Issues in Live Social Research” (2013), Marres and Weltevrede argue that data already has assumptions built into it.

In large-scale DH projects, we don’t yet have an apparatus for presenting large datasets the way we present a block quote or a reference the reader can check in the library (you can’t cite 3,000 novels in a journal article). He mentioned the Dataverse Project. We now need something different from a library to meet our needs, because data provenance matters even more than library provenance (provenance = where something came from). The day is coming when you will have a colleague whose job is to curate and manage data, and who will earn tenure for that work, because it will be essential to humanist research. Without such colleagues, we will give up and go back to reading one book!

The upside of having lots of data is that you can see large shifts and patterns across time. But there is a danger in drawing seemingly conclusive meaning from big data, because we tend to fit our results into our previous ways of knowing, such as genres and classes. This is a limited way of drawing meaning from data.

2 — Spatial Pattern

He mentioned several interesting projects and texts.

Although diagrams and maps are types of models, humanities scholars are more comfortable with representations than with models and manipulations. But we need to learn modeling skills of this kind to analyze such models successfully.

3 — Temporal Pattern (Narrative)

As humanists, we like to find linear patterns and tell stories. But hypertexts are not linear. He cited Lev Manovich’s The Language of New Media (2001), which claims that “database and narrative are natural enemies” (p. 225). He also mentioned Karl Grossner’s “Topotime” (2013) and Wolfgang Ernst’s “Archives in Transition” (2013). We must grapple with the differences between microtime, computer time, and our human concept of time.

4 — Social Networks

The social network analysis model was originally designed to examine the relationships between people and organizations. But systems like RoSE are now moving beyond this goal and applying the model to books and plays instead (for example, seeing how Shakespeare’s plays relate to one another). He mentioned Bearman and Stovel’s “Becoming a Nazi” (2000). DH is making humanists face up to the challenge of the crowd; it is no longer enough to call it culture and dismiss it.
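To make that shift from people to works concrete, here is a minimal sketch in Python of the kind of analysis involved. The plays, characters, and links below are invented for illustration only; real systems like RoSE work with far richer bibliographic data.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical toy data: works mapped to the figures they mention.
# These links are invented purely for illustration.
works = {
    "Play A": {"Falstaff", "Prince Hal"},
    "Play B": {"Prince Hal", "Hotspur"},
    "Play C": {"Falstaff", "Mistress Quickly"},
}

# Build an undirected network: two works are linked if they share a figure.
edges = defaultdict(set)
for w1, w2 in combinations(works, 2):
    if works[w1] & works[w2]:
        edges[w1].add(w2)
        edges[w2].add(w1)

# Degree (how many other works each work connects to) is the simplest
# network measure; richer measures build on the same structure.
degree = {w: len(edges[w]) for w in works}
print(degree)  # Play A shares a figure with both B and C; B and C share none
```

The same few lines work whether the nodes are people, organizations, or plays, which is exactly why the model transfers so easily from social life to literature.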

5 — Topic Models

A good introduction to topic modeling is Ted Underwood’s “Topic Modeling Made Just Simple Enough” (2012). Basically, a topic model infers a discrete set of topics from a large dataset, treating each document as a “bag of words”. For example, you might run Moby Dick through a program and come up with topics like whales and religion. The research sweet spot, and the research problem, lies between the topics you already know and the ones you’re not sure about. Topic modeling is trending right now. Ways to explore the results include word-cloud visualizations and Andrew Goldstone’s Dfr-browser. Also check out Matt Burton’s “The Joy of Topic Modeling”.

We are now in a probabilistic universe: in science, it is accepted that you can’t know definitively where an electron is when you model the atom. But in the humanities, we find that view frustrating, because it is hard to tell a good story without a definitive answer! Instead of saying “Raskolnikov kills the old woman”, you might only be able to say “there is a 74% chance that such-and-such event happened”.
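As a rough sketch of what such a program does under the hood, here is a minimal collapsed Gibbs sampler for LDA (the standard topic-modeling algorithm) in pure Python. The toy corpus below is invented to echo the Moby Dick example; real work would use a tool like MALLET or gensim on thousands of documents.

```python
import random
from collections import Counter

def lda_gibbs(docs, n_topics, n_iter=200, alpha=0.1, beta=0.01, seed=0):
    """Minimal collapsed Gibbs sampler for LDA over 'bag of words' documents."""
    rng = random.Random(seed)
    V = len({w for d in docs for w in d})            # vocabulary size
    # Start by assigning every word token a random topic.
    z = [[rng.randrange(n_topics) for _ in d] for d in docs]
    doc_topic = [[0] * n_topics for _ in docs]       # topic counts per document
    topic_word = [Counter() for _ in range(n_topics)]  # word counts per topic
    topic_total = [0] * n_topics
    for di, d in enumerate(docs):
        for wi, w in enumerate(d):
            t = z[di][wi]
            doc_topic[di][t] += 1; topic_word[t][w] += 1; topic_total[t] += 1
    for _ in range(n_iter):
        for di, d in enumerate(docs):
            for wi, w in enumerate(d):
                t = z[di][wi]  # remove this token's current assignment...
                doc_topic[di][t] -= 1; topic_word[t][w] -= 1; topic_total[t] -= 1
                # ...then resample its topic in proportion to how much the
                # document likes each topic and each topic likes this word.
                weights = [(doc_topic[di][k] + alpha) *
                           (topic_word[k][w] + beta) / (topic_total[k] + V * beta)
                           for k in range(n_topics)]
                t = rng.choices(range(n_topics), weights=weights)[0]
                z[di][wi] = t
                doc_topic[di][t] += 1; topic_word[t][w] += 1; topic_total[t] += 1
    return doc_topic, topic_word

# Invented three- and four-word "documents" echoing the whales/religion example.
docs = [["whale", "sea", "harpoon"], ["whale", "sea", "ship"],
        ["sermon", "chapel", "god"], ["sermon", "god", "prophet"]]
doc_topic, topic_word = lda_gibbs(docs, n_topics=2)
```

Dividing each row of `doc_topic` by the document’s length gives exactly the kind of probabilistic statement Liu described: not “this document is about whales” but “this document is, say, 74% one topic and 26% another”.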

A parting thought was a return to Plato’s Phaedrus and the belief that meaning is in our minds, not in books.


Liu brought up Peter Stallybrass’s (University of Pennsylvania) current work on the material history of the book. In his own classes, Liu goes back to the beginnings of text and shows students that punctuation and spacing have not always existed. Humans have in fact been “text-encoding” forever, and text has never been a static object. Although there were fears that the digital age would make text go away, it has instead embedded itself deeply in source code and become more important than ever.

Liu said that one critique of DH is that it is obsessed with distant reading, but DH does close reading too. The key is balance: one method enables some things; the other enables others. The scrambling of knowledge can be valuable too.

Liu emphasized that discursive knowledge is not the capstone of knowledge, and that the discursive narrative essay is not the only way to present information. The digital won’t become normalized until we let go of this notion; we need to rethink the relationship between that mode and other modes. (For example, instead of the thesis always coming at the end, have students place a thesis at the beginning or in the middle.) The world wants reports, spreadsheets, and parts of knowledge. The future lies in humanists forming small teams and producing research for both the public and other scholars.