Today is the second full day of the Triangle Scholarly Communication Institute, where I am part of a team that’s focusing on HuMetrics: Building Humane Metrics for the Humanities. We have each agreed to quickly blog some thoughts as part of our process; warning: what follows is quickly-crafted prose, full of rough edges.
The #HuMetrics team is making progress in our attempt to reverse engineer that which we want to measure, by starting with the values that often govern the (in some ways broader, more comprehensive) range of work that we engage in (see Chris Long’s previous post for further discussion). We were able to distill a wide-ranging brainstorm of values into five categories, and we aim to think about how those value categories relate to the processes and products of our work. The important core of our conversation today centered on the fact that while metrics are often seen (or taken) as the end-goal, the indicators in fact always align to a value, and so part of the work here, in this free and open thinking space, is to be aspirational about the values we’d like to see elevated, incentivized, and rewarded. If openness as a value is prioritized, for example, one could imagine more weight being given to articles and/or journals that are OA rather than not. If that seems like an extreme example, it’s perhaps a worthy future exercise to consider the ways in which certain methods of showing impact might already tip the scale toward particular values.
For our quick 20-minute afternoon exercise, each of us is taking a crack at writing about one of those five values: Equity, Openness, Collegiality, Quality, and Community. In our framework, Quality can take on one or more of the following characteristics:
- Pushing boundaries
- Advancing knowledge
Replication and reproducibility have a certain emphasis in some of the social sciences, but the terms might in other contexts also be thought of in the sense of extensibility. It’s important to note that these are preliminary notions, and we welcome your feedback.
Keep in mind that we’re considering metrics that can apply to multiple kinds of academic processes and outputs. The question is not just whether your article or book is of high quality (currently measured, say, by whether the article appears in a journal with a high impact score, or whether your book receives a certain number of citations), but also whether you play a role in helping assess the quality of an object (say, by serving as a peer reviewer for a grant application, a reviewer for a book, a referee for an article, etc.). Both kinds of activity are part of the transaction related to “quality,” but currently we overwhelmingly incentivize and reward the former rather than the latter. Focusing on this transactional dimension of quality unpacks the transactional relationships and scholarly networks that undergird much of our work, disturbing the notion of individual acts of scholarship by revealing the deep relationships behind scholarly works.
Another challenge to current methods for measuring impact related to quality is the well-noted difficulty of addressing context (such as whether a citation is a positive one or a negative one), and the degree to which such measurements lend themselves to a certain kind of gaming the system (for example, through overuse of citations to drive up citation scores). How do we successfully implement speed bumps (rather than roadblocks) that require some small additional effort, which will likely not prevent gaming the system but may, to carry the metaphor, slow it to a reasonable speed? The use of “active citation” (a precise and annotated citation with a link to the source), as argued by Andy Moravcsik (PDF) in the context of political science research, is one potential method, especially for more qualitative work.
Follow team #HuMetrics as we wrestle with humanities metrics. We are Christopher Long, Rebecca Kennison, Stacy Konkiel, Simone Sacchi, Jason Rhody, and Nicky Agate, and we’ll be writing here all week.