The purpose of measurement is to abstract a complex thing into a series of figures that can be arranged and rearranged. Despite the fundamentally flawed nature of some processes of abstraction, and irrespective of decades of furious chest-beating on the subject, most of us in HE have become habituated to classification and ranking: our universities and departments are ranked, research productivity and teaching are measured, journals and presses are compared and tiered, engagement is quantified, article re-tweets and Facebook shares are tracked, and so on. While often promoted as benign and objective, these evaluative measures are consequential: they affect student enrollment and funding allocation, as well as tenure and promotion. As Espeland and Sauder write, “The proliferation of quantitative measures of performance is a significant social trend that is fundamental to accountability and governance; it can initiate sweeping changes in status systems, work relations, and the reproduction of inequality.” And so, given their pervasive nature and ability “to foster broad changes, both intended and unintended, in the activities that they monitor, social measures deserve closer scholarly attention.” It seems that we have embraced, albeit to different degrees, the practice of ranking institutional repositories. I find this development problematic because it extends the language and logic of markets to an initiative that has heretofore been fueled first by magnanimity, author rights, and social justice, and only second by capital.

The Ranking Web of Institutional Repositories (RWIR), an initiative out of the Cybermetrics Lab at the Consejo Superior de Investigaciones Científicas (Madrid), seeks “to support Open Access initiatives and therefore the free access to scientific publications in an electronic form and to other academic material. The web indicators are used here to measure the global visibility and impact of the scientific repositories.” A standard opening. Subsequently, the RWIR’s mission statement devolves into a discourse embedded in market logic: “If the web performance of an institution is below the expected position according to their academic excellence, institution authorities should reconsider their web policy, promoting substantial increases of the volume and quality of their electronic publications.” The logic here is that once we have established benchmarks with which to judge whether an institution is “picking up the OA slack” via its institutional repository, we can effectively guilt it into action. In contrast, the institutions that are knocking it out of the park can display another shiny accolade on the proverbial mantle. For those lagging behind, RWIR provides a “Decalogue of good practices” to boost the “position” of the institutional repository, which is essentially nothing more than an SEO brochure.

Rankings only work in a state of competition, where decisions about students, research funding, scholars, and accolades are predicated on quality indicators. After all, university administrators anticipate the yearly QS and THE results with bated breath not because of general interest, but because these social statistics mean something to consumers. One need only look at the reactions of high-ranking administrators in the wake of a new set of rankings – in the press and elsewhere – to get a sense of their value. If we focus on publishing, slightly more complicated market forces are at work. The relationship between “top ranked journals” and tenure and promotion committees has engendered a system where authors compete for coveted real estate in particular journals. Journal impact factors act as the quality indicator in some fields, whilst in others, formal or informal acknowledgements between peers denote which journals are worth pursuing; in both cases, some form of ranking dictates behaviour.

All of this is moot in the context of institutional repositories. Institutional repositories do not compete against one another for pre- and post-prints, and authors do not choose between institutional repositories, for two reasons. First, there is no incentive to archive one’s work in one institutional repository over another; the oft-publicized benefits of making one’s work available in an institutional repository (for instance, citation benefits) are cemented in the act of uploading, not in the venue where the work appears. Second, even if there is research suggesting that some institutional repositories are indexed more regularly than others, institutional repositories, by their very nature, only support the work of scholars affiliated with that institution. The question, then, is not “should I archive my research in X or Y repository?” but simply “should I make my research available?”

On the one hand, I’m delighted to see that Western University’s Scholarship@Western and the University of Waterloo’s UWSpace – I am currently enrolled at Western, where I work as a repository manager, and I work in the HSS library at Waterloo – are ranked 124th and 134th in the world, and 51st and 57th in North America, respectively. Setting aside the opaque nature of the RWIR indicators, these rankings suggest that both institutions are committed to open scholarship. That’s great. I’m also not surprised that the University of Toronto’s TSpace and the University of British Columbia’s (beautiful) Open Collections are the top-ranked Canadian repositories.

On the other hand, I can imagine a world where institutional repository rankings are used by institutional strategists to market a university’s level of “openness.” National funding bodies, like the RCUK, the Tri-Agencies, and the NSF, are already trying to measure the incalculable, like impact, innovation, engagement, and reach: why not add another metric to the list? The byproduct of this tack would be a renewed sense of urgency in populating institutional repositories. To maximize rankings, universities would adopt three well-known strategies: reallocating funds and resources, developing new policies that mandate certain outcomes, and identifying ways of “gaming” the numbers. This would likely increase the absolute volume of scholarship available in institutional repositories; yet the end goal would be informed not by the moral values we currently purport to uphold, but by a brazen profit motive.