The purpose of measurement is to abstract a complex thing into a series of figures that can be arranged and rearranged. Despite the fundamentally flawed nature of some processes of abstraction, and irrespective of decades of furious chest-beating on the subject, most of us in HE have become habituated to classification and ranking: our universities and departments are ranked, research productivity and teaching are measured, journals and presses are compared and tiered, engagement is quantified, article re-tweets and Facebook shares are tracked, and so on. While often promoted as benign and objective, these evaluative measures are consequential: they affect student enrollment, funding allocation, and tenure and promotion. As Espeland and Sauder write, “The proliferation of quantitative measures of performance is a significant social trend that is fundamental to accountability and governance; it can initiate sweeping changes in status systems, work relations, and the reproduction of inequality.” And so, given their pervasive nature and ability “to foster broad changes, both intended and unintended, in the activities that they monitor, social measures deserve closer scholarly attention.” It seems that we have embraced, albeit to different degrees, the practice of ranking institutional repositories. I find this development problematic because it extends the language and logic of markets to an initiative that has heretofore been fueled by magnanimity, author rights, and social justice first, and capital second.

The Ranking Web of Institutional Repositories (RWIR), an initiative out of the Cybermetrics Lab at the Consejo Superior de Investigaciones Científicas (Madrid), seeks “to support Open Access initiatives and therefore the free access to scientific publications in an electronic form and to other academic material. The web indicators are used here to measure the global visibility and impact of the scientific repositories.” A standard opening. Subsequently, the RWIR’s mission statement devolves into a discourse embedded in market logic: “If the web performance of an institution is below the expected position according to their academic excellence, institution authorities should reconsider their web policy, promoting substantial increases of the volume and quality of their electronic publications.” The logic here is that once we have established benchmarks with which to judge whether an institution is “picking up the OA slack” via its institutional repository, we can effectively guilt it into action. In contrast, the institutions that are knocking it out of the park can display another shiny accolade on the proverbial mantle. For those lagging behind, RWIR provides a “Decalogue of good practices” to boost the “position” of the institutional repository, which is ostensibly nothing more than an SEO brochure.

Rankings only work in a state of competition where decisions are predicated on quality indicators (for students, research funding, scholars, and accolades). After all, university administrators anticipate the yearly QS and THE results with bated breath not out of general interest, but because these social statistics mean something to consumers. One need only look at the reactions of high-ranking administrators in the wake of a new set of rankings, in the press and elsewhere, to get a sense of their value. If we focus on publishing, slightly more complicated market forces are at work. The relationship between “top ranked journals” and tenure and promotion committees has engendered a system in which authors compete for coveted real estate in particular journals. Journal impact factors act as the quality indicator in some fields, whilst in others, formal or informal acknowledgements between peers denote which journals are worth pursuing; in both cases, some form of ranking dictates behaviour.
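For readers outside impact-factor disciplines, the figure itself is simple arithmetic: citations received this year by the items a journal published in the previous two years, divided by the number of citable items it published in those years. A minimal sketch (the function name and parameters are my own, offered only as illustration):

```python
# The standard two-year journal impact factor, sketched for illustration.
def impact_factor(citations_to_prev_two_years: int,
                  citable_items_prev_two_years: int) -> float:
    """Citations received in year Y by items published in years Y-1 and
    Y-2, divided by the number of citable items published in those years."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# e.g. 300 citations in 2017 to a journal's 2015-2016 items, against 150
# citable items, yields an impact factor of 2.0
print(impact_factor(300, 150))  # 2.0
```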

All of this is moot in the context of institutional repositories. Institutional repositories do not compete against one another for pre- and post-prints, and authors do not deliberate between different institutional repositories, for two reasons. First, there’s no incentive to archive one’s work in one institutional repository over another; the oft-publicized benefits of making one’s work available in an institutional repository (for instance, citation benefits) are secured in the act of uploading, not by the venue where the work appears. Second, even if there is research suggesting that some institutional repositories are indexed more regularly than others, institutional repositories, by their very nature, only support the work of scholars affiliated with that institution. The question, then, is not, “should I archive my research in X or Y repository?”, but simply, “should I make my research available?”.

On the one hand, I’m delighted to see that Western University’s Scholarship@Western and the University of Waterloo’s UWSpace are ranked 124th and 134th in the world, and 51st and 57th in North America, respectively (I am currently enrolled at Western, where I work as a repository manager, and I work in the HSS library at Waterloo). Setting aside the opaque nature of RWIR indicators, these rankings suggest that both institutions are committed to open scholarship. That’s great. I’m also not surprised that the University of Toronto’s TSpace and the University of British Columbia’s (beautiful) Open Collections are the top-ranked Canadian repositories.

On the other hand, I can imagine a world where institutional repository rankings are used by institutional strategists to market a university’s level of “openness.” National funding bodies, like the RCUK, the Tri-Agencies, and the NSF, are already trying to measure the incalculable, like impact, innovation, engagement, and reach: why not add another to the list? The byproduct of this tack would be a renewed sense of urgency in populating respective institutional repositories. To maximize rankings, universities would turn to three well-known strategies: reallocating funds and resources, developing new policies that mandate certain outcomes, and identifying ways of “gaming” the numbers. This would likely result in an increase in the absolute volume of scholarship available in institutional repositories; yet the end goal would be informed not by the moral values that we currently purport to uphold, but by a brazen profit motive.

‡A note about this post: it starts off slow, but it picks up towards the end. Much like Labour’s showing in the most recent general election.

The first Strategic Mandate Agreement (SMA) between the government of Ontario and its universities and colleges, introduced, not without criticism, by the Liberal government in 2014, concluded in March 2017. The purpose of the SMA was to establish certain metrics for higher education at the provincial level (blanket metrics that would apply to all colleges and universities in the province), as well as specific institutional metrics that focus on particular strengths. At my current university (University of Waterloo), institutional metrics include things like “Cumulative total of individuals employed by Waterloo’s start-ups created in the last three years,” “Average research funding received by tenure and tenure-track faculty members from non-Tri-Council sources over a three-year period,” and “Average number of research-funded collaborations/partnerships with industry, government, and NGOs over the last three years.” There’s talk of including these in the next iteration of the SMA (SMA 2), in addition to developing new ones.

At first glance, these metrics are innocuous. In an increasingly austere environment, measurement and quantitative evaluation have become part and parcel of the machinery of academia at all levels, normalized for provosts, students, and everyone in between. However, having given this a little more thought, I think that the most sinister thing about these metrics is that they’re increasingly self-imposed; that is, governments, in an act of ‘magnanimity’, are encouraging institutions to identify which measures they’d like to be held to account on. Having received the proverbial carrot, we seem to have shifted the conversation away from ‘why must we measure?’ to ‘fine, which measures shall we highlight?’ I’d like to concentrate on the third metric above (“Average number of research-funded collaborations/partnerships with industry, government, and NGOs”) for the remainder of this post, and approach it from a perspective informed by my experience in History and my current job as a librarian working in bibliometrics and research impact.

We all want to present the best version of ourselves, especially when funding is on the line; institutions of higher learning are no exception. As a result, when the onus of choosing which metrics should be highlighted falls on institutional shoulders, the natural inclination is to choose the ones that are most flattering. My institution is a world-class institution in STEM research and education (I throw around words like ‘Quantum’ and ‘Nano’ a few dozen times a day). The publishing culture in STEM fields embraced co-authorship and multi-author publications as the modus operandi decades ago. In most cases, the very nature of the research requires teams consisting of dozens of researchers, from across the world, doing a variety of tasks that I can’t begin to explain. In other cases, individuals are included as authors as a token of gratitude for reading early manuscripts or as a show of respect. It therefore makes sense that my institution would highlight international collaboration as a worthwhile measure. The problem is that, at some point, co-authorship alone became the standard way of measuring collaborative scholarship, international or otherwise (for studies on co-authorship in STEM see this and this).

A quick look at Web of Science data (I know this is an imperfect source, but bear with me) illustrates how standardized measures of collaboration systematically disadvantage those working in the Arts and Humanities, specifically History (a sketch of how such a figure can be computed follows the list):

Nuclear Physics: 30.45% international collaboration

Engineering, E&E: 13.2% international collaboration

History: 0.72% international collaboration
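Here, “international collaboration” means the share of papers whose co-authors are affiliated with institutions in more than one country. A minimal sketch of the computation, assuming a simplified CSV export (the file layout and column name are my own invention, not an actual Web of Science format):

```python
# A sketch, assuming a CSV with one row per paper and a "countries"
# column listing affiliation countries separated by semicolons
# (a simplification of Web of Science address fields).
import csv

def international_share(path: str) -> float:
    """Percentage of papers whose authors span two or more countries."""
    total = international = 0
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            countries = {c.strip() for c in row["countries"].split(";") if c.strip()}
            if not countries:
                continue  # skip records without usable address data
            total += 1
            if len(countries) > 1:
                international += 1
    return 100 * international / total if total else 0.0

# e.g. international_share("history.csv") would return 0.72 for a
# hypothetical export in which 0.72% of papers cross a national border
```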

These numbers should not be surprising. They represent two different cultures of publishing that I need not describe here. The problem is that, if we begin to subscribe to self-imposed institutional metrics that overwhelmingly privilege one publishing culture over another, what’s to say that those metrics won’t facilitate disproportionate or inequitable incentive structures at the faculty level? Again, the most sinister part of this scenario is that we’d be doing it to ourselves. I’m not blind to the realities of academia; measuring research impact and productivity are not capricious trends. We have two options, as I see it: we could dig in our heels and rage against the dying of the light, choosing not to engage with metrics in any fashion; or, we could start to think of ways to express to administrators, librarians*, analysts, and governments that the work we do is profoundly collaborative.

So, are historians collaborative? I turned to “Acknowledgements” in research articles published in Gender & History and the Journal of Modern History to find out. I’ve always been interested in acknowledgement texts. Whether in books or research articles, I find that I learn a lot about the person whose work I’m about to engage with from the way that they express gratitude to young graduate students and colleagues (international or otherwise), show humility (or not), and communicate their love to their children and partners. This article in Applied Linguistics by Davide Simone Giannoni breaks acknowledgement texts apart into their constitutive parts. As Giannoni writes, “acknowledgements are staged texts with a coherent rationale governing their rhetorical construction.” In general, through a “socially-accepted communicative framework,” authors articulate their debts “with enough ambiguity to reconcile the public and private realms of discourse.” Ultimately, “if acknowledgements have been discounted as an exercise in flattery, this is largely due to their misuse and exclusion from the peer review process.” This is a shame, given that “the genre’s formulaic, inventory-like appearance conceals a carefully worded rhetoric emphasizing academia’s most prized values: cumulative knowledge and intellectual integrity” [pp. 23-24]. Can you have these two things without collaboration?

My rushed analysis of an (admittedly anemic) sample of 33 articles published in the last year or so reveals that historians publishing in those journals are surprisingly collaborative, despite publishing as single authors. In G&H, 61% of authors acknowledged that their article would not have been possible were it not for significant edits, comments, and criticisms from at least one colleague at an institution in a country different from their own (in some cases the author referenced three or more international collaborators).† In JMH, a similar result: 64% expressed that they were extremely grateful for suggestions, comments, and edits on early drafts of their articles from colleagues abroad. Here’s what these acknowledgements, illustrated as lines or edges, look like on a map:

Obviously, I can’t confirm just how deep these collaborations ran, or whether they are just “exercises in flattery”; nonetheless, I would ask the authors the following question: if the publishing culture in the humanities mirrored that of STEM fields, would you feel comfortable listing the colleague to whom you expressed your gratitude as a secondary author?‡
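For those who want to poke at the numbers above, here is roughly how the tally and the map’s edge list can be produced. This is a sketch under my own assumptions: a hand-coded CSV of acknowledgement ties, with file and column names of my invention, not a published dataset:

```python
# A sketch of the tally and edge list, assuming a hand-coded CSV with
# columns: journal, article_id, author_country, acknowledged_country
# (one row per acknowledged colleague; an article with no acknowledged
# colleagues needs a placeholder row, with acknowledged_country left
# blank, so that it still counts in the denominator).
import csv
from collections import defaultdict

def acknowledgement_stats(path: str):
    """Per-journal share of articles with at least one international
    acknowledgement, plus country-to-country edges for mapping."""
    has_intl = defaultdict(dict)  # journal -> {article_id: bool}
    edges = []                    # (author_country, acknowledged_country)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            a, c = row["author_country"], row["acknowledged_country"]
            intl = bool(c) and a != c
            seen = has_intl[row["journal"]]
            seen[row["article_id"]] = seen.get(row["article_id"], False) or intl
            if intl:
                edges.append((a, c))
    shares = {j: 100 * sum(arts.values()) / len(arts)
              for j, arts in has_intl.items()}
    return shares, edges

# e.g. shares might come out as {"G&H": 61.0, "JMH": 64.0} for my sample;
# the edges can then be drawn as lines between countries on a map
```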

I’m not suggesting that we crowd our acknowledgement texts with collaborators (though I wonder if we should begin a process of reconceptualizing ‘authorship’ in the humanities into something like Blaise Cronin and others’ idea of ‘contributorship’). Not only are acknowledgements not a metadata field in most commercial databases, but doing so would lead to the very same problem that we’re seeing in STEM author fields. All of this is to say that we should not clock out at the very mention of measurement: we should pay close attention to the metrics that are chosen for us, and be present when given the chance to choose for ourselves.

* Librarians working in bibliometrics are generally sympathetic and well-informed on the issue.

† It might not surprise you to know that Antoinette Burton came up again and again in the sample.

Postscript

‡ Since writing this post I stumbled across Nadine Desrochers, Adèle Paul-Hus, and Vincent Larivière’s chapter, “The Angle Sum Theory: Exploring the Literature on Acknowledgements in Scholarly Communication” (http://crctcs.openum.ca/files/sites/60/2016/05/Desrochers_Paul-Hus_Lariviere_2016.pdf), in Cassidy Sugimoto’s (ed.) Theories of Informetrics and Scholarly Communication (2016). They do a much better job explaining acknowledgement texts than I could ever hope to do.


This post was originally published as a MediaCommons Field Guide. You can read the original, along with other responses on the question of the future of the archive, here.


I’ll start off by disclosing that I am not an archivist; my perspective is informed by the time I’ve spent in archives as a researcher, and by the work that I’ve been doing recently on digital historiography. In a way, I’m an outsider looking in. That being said, historians are introduced to, and deeply respect, the elements of archival theory that make their work possible, including provenance, authority, and context. I’m also aware that digital technologies have profoundly impacted the way that historians search for, conduct, and disseminate research. In particular, historians increasingly expect, on the one hand, to find primary sources on the web and, on the other, are encouraged by funding bodies and institutions to make material available online. This, in turn, has placed added pressure on archivists to allocate increased resources to improving catalogues and item descriptions, and to provide full-text documents or high-resolution images whenever possible. The relationship is reciprocal. Practicing digital humanists have taken it upon themselves to develop curated online repositories using a variety of platforms to meet this demand and to support open access initiatives. While this practice is generally positive, I believe that treating an online repository as tantamount to an archive (a move signalled by our use of the “digital” qualifier) requires some critical attention.

In the most recent edition of Debates in the Digital Humanities (available online), Jentery Sayers published a thought-provoking piece titled “Dropping the Digital.” In short, Sayers “ruins” the digital humanities through ruination, a technique whereby a text is manipulated and subsequently compared to the original to identify differences and confirm or refute previous assumptions. Sayers “drops the digital” from a corpus, and combs through the product in order “to examine how its absence shapes meaning and interpretation.” Ultimately, Sayers’ essay encourages us to be reflexive about how and why we append “digital” in qualifying research. The way I see it, a comparable act of uncritical qualification is occurring on the web with the recent explosion of so-called “digital archives.”

The proliferation of low-cost, low-barrier-to-entry digital repository and content management systems, like Omeka and DSpace, has led to the creation of hundreds (if not thousands) of online repositories housing digital artifacts. These artifacts are digital copies of analog materials, born-digital documents, or both. Importantly, these repositories are often created by non-archivists, they are open access, and they are sometimes referred to as “digital archives.” The final point requires attention. How does “dropping the digital” from “digital archives” inform our understanding of these online repositories? How are they different from the “physical” archive that we are so familiar with?

Perhaps this is all just a natural shift in what the word “archive” means to people, prompted by digital methodologies and tools. However, I’m in agreement with Kate Theimer when she argues that the colloquial use of the term “archive” to denote simply “a purposeful collection of surrogates” is problematic due to “the potential for a loss of understanding and appreciation of the historical context that archives preserve in their collections, and the unique role that archives play as custodians of materials in this context.” Indeed, the act of archiving is not simply an arrangement of curated artifacts; materials undergo a strict process of appraisal according to principles of provenance, among others. And while archival institutions are not without criticism*, I think it’s important that we remind ourselves of what we’re overlooking when we co-opt the term “archive”, a term laden with symbolic meaning, for our digital repositories. Without a doubt, the digitization work that we undertake in cooperation with institutional libraries and community organizations is significant and worthwhile; however, the very act of attempting to create a “digital archive” is deeply informed by a value system embedded in Western ways of knowing. The creation of digital collections will undoubtedly continue; it is a trend fueled mainly by principles of accessibility, and it is commendable and much needed. However, humanities scholars who are turning to and creating these digital resources must think critically about why and how they are created, and about how they might affect new scholarship and knowledge.

* See, for example, Wood, Stacy, et al. “Mobilizing records: Re-framing archival description to support human rights.” Archival Science 14 (2014): 397-419.

About three weeks ago, back when the world made sense and talk of Donald Trump in the Oval Office was almost exclusively preceded by a “wouldn’t it be hilarious if,” Robert Darnton, Carl H. Pforzheimer University Professor and University Librarian Emeritus at Harvard, mused about the future of the Library of Congress in a Clinton administration. “When the new president, if she is Hillary Clinton, moves into the White House,” asks Darnton, “will she unpack her library in the spirit of Walter Benjamin—releasing memories of adventures attached to books? Not likely.” A sensible conclusion. Darnton continues, undeterred: “the arrival of a new president at this moment, not long after the dawn of the digital age, could open an opportunity to reorient literature and learning in a way that was envisioned by the Founders of our country, one that would bring books within the reach of the entire citizenry.” This came in the wake of President Obama’s precedent-setting appointment of an African-American woman, Dr. Carla Hayden, as the Librarian of Congress. (Hayden’s defence of patron privacy in the wake of the Patriot Act is the stuff of legend in library circles.) Darnton must have thought the time ripe: “Two powerful women located at opposite ends of the axis between Capitol Hill and the White House could revive cultural institutions, restore the public domain, and repair the fault lines that run through our information system. More power to them.”

Having reread Darnton’s hopeful article, I cannot help but feel a sense of dread. In the piece, Darnton outlines three specific areas where Hayden should gain ground: support open-access publishing and challenge publisher oligopolies; team up with Google, the Internet Archive, and HathiTrust to kick digitization initiatives into overdrive and broaden access; and undermine lobbying efforts to extend copyright while bolstering the Fair Access to Science and Technology Research Act (FASTR). After all, “Her library is like no other in the country,” writes Darnton. “It is a national institution, the main repository of our country’s culture.” A tall order, no doubt, but it is likely that Clinton would have supported Hayden in these endeavours, not out of pure altruism, but out of her proven commitment to public goods, like education and health, and her stance on copyright restructuring.

What will the Trump library look like as of January? It is impossible to tell at the moment, mainly because he hasn’t been forthright about any of these issues. Nevertheless, we can tease bits out of his demagoguery. His market-driven approach to public goods makes everyone who works in, or thinks about, public institutions, like universities and public libraries, quiver; one thing is certain: Trump will not be a second Carnegie. His open critique of the TPP puts him, strangely enough, on the side of copyright reform; but this is a red herring. He cares less about the absurd copyright provisions proposed by the TPP than about the jobs of American workers that would be jeopardized. While he hasn’t gestured to FASTR directly, Trump’s general stance on broadening information access is disquieting. First, it’s clear that Trump has no idea how the internet functions. “We have to go see Bill Gates and a lot of different people that really understand what’s happening,” he shouted. “We have to talk to them maybe in certain areas closing that internet up in some way.” Second, Trump’s disdain for journalism is well-documented; time and again he has suggested that he’d like to see libel laws extended to curtail free speech. And finally, given his criticism of bloated government, it’s possible that Trump will support making the Copyright Office an independent organization outside the purview of the Library of Congress, opening it up to concentrated lobbying.

Carla Hayden is the most influential information professional in America, but the Librarian of Congress’ voice could not be heard amidst the cacophony following Tuesday’s election. This is telling, I think. I only hope that in the months to come she restates her commitment to fulfilling the mission of the Library of Congress—that is, to broaden access to information for all citizens. Yes, Hayden’s appointment is barrier-breaking. But going forward she has an opportunity to set a precedent for the entire profession. Now is not the time for stereotypical meekness and neutrality: she must be loud, boisterous, and political.