On Research Metrics and Acknowledgement Texts

‡A note about this post: it starts off slow, but it picks up towards the end. Much like Labour’s showing in the most recent general election.

The first Strategic Mandate Agreement (SMA) between the government of Ontario and its universities and colleges, introduced, not without criticism, by the Liberal government in 2014, concluded in March 2017. The purpose of the SMA was to establish certain metrics for higher education at the provincial level (that is, metrics that would apply across all colleges and universities in the province), as well as specific institutional metrics that focus on particular strengths. At my current university (University of Waterloo), institutional metrics include things like “Cumulative total of individuals employed by Waterloo’s start-ups created in the last three years,” “Average research funding received by tenure and tenure-track faculty members from non-Tri-Council sources over a three-year period,” and “Average number of research-funded collaborations/partnerships with industry, government, and NGOs over the last three years.” There’s talk of carrying these into the next iteration of the SMA (SMA 2), in addition to developing new ones.

At first glance, these metrics are innocuous. In an increasingly austere environment, measurement and quantitative evaluation have become part and parcel of the machinery of academia at all levels, normalized for provosts, students, and everyone in between. Having given this a little more thought, however, I think the most sinister thing about these metrics is that they’re increasingly self-imposed; that is, governments, in an act of ‘magnanimity’, are encouraging institutions to identify which measures they’d like to be held to account on. Having received the proverbial carrot, we seem to have shifted the conversation away from ‘why must we measure?’ to ‘fine, which measures shall we highlight?’ I’d like to concentrate on the third metric above (“Average number of research-funded collaborations/partnerships with industry, government, and NGOs”) for the remainder of this post, and approach it from a perspective informed by my training in History and my current job as a librarian working in bibliometrics and research impact.

We all want to present the best version of ourselves, especially when funding is on the line; institutions of higher learning are no exception. As a result, when the onus of choosing which metrics to highlight falls on institutional shoulders, the natural inclination is to choose the most flattering ones. My institution is a world-class institution in STEM research and education (I throw around words like ‘Quantum’ and ‘Nano’ a few dozen times a day). The publishing culture in STEM fields embraced co-authorship and multi-author publications as the modus operandi decades ago. In most cases, the very nature of the research requires teams of dozens of researchers, from across the world, doing a variety of tasks that I can’t begin to explain. In other cases, individuals are included as authors as a token of gratitude for reading early manuscripts or as a show of respect. It therefore makes sense that an institution like mine would highlight international collaboration as a worthwhile measure. The problem is that, at some point, co-authorship alone became the standard way of measuring collaborative scholarship, international or otherwise (for studies on co-authorship in STEM see this and this).

A quick look at Web of Science data (I know this is an imperfect source, but bear with me) illustrates how standardized measures of collaboration systematically disadvantage those working in the Arts and Humanities, and in History specifically. The figures below are the percentage of each field’s publications involving international co-authorship:

Nuclear Physics: 30.45% international collaboration

Engineering, Electrical & Electronic: 13.2% international collaboration

History: 0.72% international collaboration
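
For concreteness, here is a minimal Python sketch of the kind of calculation behind figures like these: a paper counts as “internationally collaborative” if its authors’ affiliations span more than one country. The record format is an assumption of mine, not Web of Science’s export schema.

```python
# Toy records, not the actual Web of Science export format: one record per
# paper, with its field and the countries of its authors' affiliations.
records = [
    {"field": "History", "countries": ["Canada"]},
    {"field": "History", "countries": ["Canada", "UK"]},
    {"field": "Nuclear Physics", "countries": ["Canada", "Germany", "Japan"]},
    {"field": "Nuclear Physics", "countries": ["USA"]},
]

def international_share(records, field):
    """Percentage of a field's papers with author affiliations in 2+ countries."""
    papers = [r for r in records if r["field"] == field]
    intl = [r for r in papers if len(set(r["countries"])) > 1]
    return 100 * len(intl) / len(papers) if papers else 0.0

for field in sorted({r["field"] for r in records}):
    print(f"{field}: {international_share(records, field):.2f}% international")
```

Note what this measure can and cannot see: the signal is co-authorship alone; informal exchange, and anything recorded only in acknowledgements, is invisible to it.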

These numbers should not be surprising. They represent two different cultures of publishing that I need not describe here. The problem is this: if we begin to subscribe to self-imposed institutional metrics that overwhelmingly privilege one publishing culture over another, what’s to say those metrics won’t facilitate disproportionate or inequitable incentive structures at the faculty level? Again, the most sinister part of this scenario is that we’d be doing it to ourselves. I’m not blind to the realities of academia; measuring research impact and productivity is not a capricious trend. As I see it, we have two options: we could dig in our heels and rage against the dying of the light, refusing to engage with metrics in any fashion; or we could start to think of ways to express to administrators, librarians*, analysts, and governments that the work we do is profoundly collaborative.

So, are historians collaborative? I turned to the “Acknowledgements” sections of research articles published in Gender & History and the Journal of Modern History to find out. I’ve always been interested in acknowledgement texts. Whether in books or research articles, I find that I learn a lot about the person whose work I’m about to engage with from the way they express gratitude to young graduate students and colleagues (international or otherwise), show humility (or not), and communicate their love to their children and partners. This article in Applied Linguistics by Davide Simone Giannoni breaks “acknowledgement texts” apart into their constitutive parts. As Giannoni writes, “acknowledgements are staged texts with a coherent rationale governing their rhetorical construction.” In general, through a “socially-accepted communicative framework,” authors articulate their debts “with enough ambiguity to reconcile the public and private realms of discourse.” Ultimately, “if acknowledgements have been discounted as an exercise in flattery, this is largely due to their misuse and exclusion from the peer review process.” This is a shame, given that “the genre’s formulaic, inventory-like appearance conceals a carefully worded rhetoric emphasizing academia’s most prized values: cumulative knowledge and intellectual integrity” (pp. 23-24). Can you have these two things without collaboration?

My rushed analysis of an (admittedly anemic) sample of 33 articles published in the last year or so reveals that historians publishing in those journals are surprisingly collaborative, despite publishing as single authors. In G&H, 61% of authors acknowledged that their article would not have been possible were it not for significant edits, comments, and criticisms from at least one colleague at an institution in a country different from their own (in some cases the author referenced three or more international collaborators).† In JMH, the result was similar: 64% expressed gratitude for suggestions, comments, and edits on early drafts of their articles from colleagues abroad. Here’s what these acknowledgements, illustrated as lines or edges, look like on a map:
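
For anyone curious about the mechanics, here is a rough Python sketch of how the underlying edge list might be assembled: one weighted edge per cross-border (author country, acknowledged colleague’s country) pair mentioned in an article’s acknowledgements. The records below are invented placeholders, not my coded sample.

```python
from collections import Counter

# Invented placeholder records: one per article, with the author's country
# and the countries of colleagues thanked in the acknowledgements.
articles = [
    {"journal": "G&H", "author_country": "USA", "ack_countries": ["UK", "Australia"]},
    {"journal": "G&H", "author_country": "Canada", "ack_countries": []},
    {"journal": "JMH", "author_country": "Germany", "ack_countries": ["USA"]},
]

# One weighted edge per cross-border (author country -> acknowledged country) tie.
edges = Counter()
for a in articles:
    for c in a["ack_countries"]:
        if c != a["author_country"]:
            edges[(a["author_country"], c)] += 1

# Share of articles per journal acknowledging at least one colleague abroad.
for journal in ("G&H", "JMH"):
    pool = [a for a in articles if a["journal"] == journal]
    intl = [a for a in pool
            if any(c != a["author_country"] for c in a["ack_countries"])]
    print(f"{journal}: {100 * len(intl) / len(pool):.0f}%")

print(edges)  # weighted edges, ready to draw as lines on a basemap
```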

Obviously, I can’t confirm how deep these collaborations ran, or whether they are mere “exercises in flattery”; nonetheless, I would ask the authors the following question: if the publishing culture in the humanities mirrored that of STEM fields, would you feel comfortable listing the colleague to whom you expressed your gratitude as a secondary author?‡

I’m not suggesting that we crowd our acknowledgement texts with collaborators (though I do wonder if we should begin a process of reconceptualizing ‘authorship’ in the humanities, toward something like Blaise Cronin and others’ idea of ‘contributorship’). Not only are acknowledgements not a metadata field in most commercial databases, but crowding them would also lead to the very problem we’re seeing in STEM author fields. All of this is to say that we should not clock out at the very mention of measurement: we should pay close attention to the metrics that are chosen for us, and be present when given the chance to choose for ourselves.
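
To make the contributorship idea concrete, here is a minimal Python sketch of what a contributorship-style record could look like for a humanities article. The role names and record shape are my own illustrative assumptions, not an established metadata standard.

```python
from dataclasses import dataclass, field

# Illustrative assumption: contributions recorded as structured roles,
# kept separate from the author field itself.

@dataclass
class Contributor:
    name: str
    country: str
    roles: list  # e.g. "writing", "commenting on drafts", "archival guidance"

@dataclass
class Article:
    title: str
    author: Contributor
    contributors: list = field(default_factory=list)

paper = Article(
    title="An Example History Article",
    author=Contributor("J. Historian", "Canada", ["research", "writing"]),
    contributors=[
        Contributor("A. Colleague", "UK", ["commenting on drafts"]),
        Contributor("B. Archivist", "France", ["archival guidance"]),
    ],
)

# Collaboration can be counted without inflating the author field.
abroad = [c for c in paper.contributors if c.country != paper.author.country]
print(f"{len(abroad)} international contributors acknowledged")
```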

* Librarians working in bibliometrics are generally sympathetic and well-informed on the issue.

† It might not surprise you to know that Antoinette Burton came up again and again in the sample.

PostScript

‡ Since writing this post I stumbled across Nadine Desrochers, Adèle Paul-Hus, and Vincent Larivière’s chapter, “The Angle Sum Theory: Exploring the Literature on Acknowledgements in Scholarly Communication” (http://crctcs.openum.ca/files/sites/60/2016/05/Desrochers_Paul-Hus_Lariviere_2016.pdf), in Cassidy Sugimoto (ed.), Theories of Informetrics and Scholarly Communication (2016). They do a much better job explaining acknowledgement texts than I could ever hope to do.

