Last Monday, when Dr. Rajiv Jhangiani opened his keynote at the 2018 Open Education Summit, one of the first things he did was place his work in the context of bell hooks and Jesse Stommel. Hearing this, my internal voice said, “O.K., now I know where he’s coming from.”
It’s an admitted generalization, but let me suggest that when academics compose a scholarly article, they tend to introduce their work with a positioning statement that expresses the tradition of thought their work extends. This might be done explicitly, as Dr. Jhangiani did in his keynote, or quietly, through the careful choice of whose definitions are used to set the table for the work.
The adjective ‘scientific’ is not attributed to isolated texts that are able to oppose the opinion of the multitude by virtue of some mysterious faculty. A document becomes scientific when its claims stop being isolated and when the number of people engaged in publishing it are many and explicitly indicated in the text. When reading it, it is on the contrary the reader who becomes isolated. The careful marking of the allies’ presence is the first sign that the controversy is now heated enough to generate technical documents.
Latour B. Science in action: how to follow scientists and engineers through society. Cambridge: Harvard University Press; 2005. p. 33.
If scholarly communication is a conversation, then we can think of journals as parlors, where you can expect that certain conversations are taking place. If your work becomes a frequent touchpoint of these conversations, you get… a high h-index?
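For readers unfamiliar with the metric, the h-index has a precise definition: a scholar has index h if h of their papers have each been cited at least h times. A minimal sketch in Python (the function name and the sample citation counts are mine, for illustration only):

```python
def h_index(citation_counts):
    """Return the largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4, and 3 times: four of them
# have at least 4 citations, so the h-index is 4.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Note what the formula discards: who cited the work, why, and whether anyone beyond those citing authors ever read it.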
As someone who is only five months into my position as Scholarly Communications Librarian, I’ve been particularly mindful of how people talk about scholarship and of the various measures and analytics we use to describe scholarly work.
I try to keep in mind that metrics are just a shadow of an object. You can make a shadow larger by angling yourself in various ways towards the sun, but you shouldn’t forget that when you enlarge your shadow this way, the object casting it does not change.
I was approached recently by a peer whose faculty member was hesitant to add their work to the university’s repository, afraid that it would draw links away from their work on SSRN and thus diminish their Google Scholar ranking.
What should our response to these concerns be? One thing we could do is reassure them that we are doing all we [ethically] can to maximize the SEO of our IR.
But I believe that it would be better to express our work not in terms of links and citation counts but rather in terms of potential readership.
We could try to reframe the conversation so it seems less like a zero-sum game. One set of readers will discover the work as a preprint on SSRN, and another set will discover it in an institutional repository. These interested readers could include a potential graduate student looking for an advisor to work with. It could be someone who discovered the work in the IR because we allow subject-specific sites to index our institutional repository. It could be the local press. And if fears of SSRN link-cannibalization are still strong, we can always offer to place the work in the IR under a short-term embargo.
When we only think of metrics, we end up chasing shadows.
When a faculty member assesses the quality of a peer’s work, they take the publication venue as a measure of the quality of that work. The unspoken rule is that every scholar, if they could, would always publish in the highest-ranked journal in their field, and so any choice to publish anywhere else must be because the work in question was not good enough. By this logic, any article published in a higher-ranked journal is better than any article in a lower-ranked journal.
And yet it’s easy to forget that the rankings behind ‘highly ranked journals’ are calculated using formulas that process the collected sum and speed of citations. In the end, journal ranking can also be reconsidered as a measure of readership.
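To make that concrete, the best-known of these formulas, the two-year Journal Impact Factor, is simple arithmetic on citation counts: citations received this year to items a journal published in the previous two years, divided by the number of citable items it published in those two years. A sketch, with invented numbers for a hypothetical journal:

```python
def impact_factor(citations_to_prev_two_years, items_published_prev_two_years):
    """Two-year impact factor: citations received this year to items from
    the previous two years, divided by the citable items published in
    those two years."""
    return citations_to_prev_two_years / items_published_prev_two_years

# Hypothetical journal: 300 citations in 2018 to its 2016-17 articles,
# out of 150 citable items published across 2016-17.
print(impact_factor(300, 150))  # 2.0
```

Every citation in that numerator was made by an author who, presumably, read the work: the ranking is readership counted at one remove.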
Instead of positioning our work as ‘how to increase your h-index’, we should not forget that each citation is an author whom we can also (perhaps charitably) consider a reader.
When I was the lead of Open Data Windsor Essex, we hosted a wonderful talk from Detroiter Alex Hill called Giving Data Empathy. What he reminded us in his talk was that behind each data point in his work was a person, and that it is essential to remember how diminished that person is when they are reduced to a ‘count’.
Let’s remember this as well.
Every data point, a reader.