Altmetrics – What’s that all about?

August 30, 2013

Users of Symplectic will probably have noticed a little Altmetrics button next to the count of their citations. It will probably look a bit like this –

[image: Altmetric badge]

If you're lucky and your work has created some Internet and social media activity then your altmetrics button could look more like this –

[image: Altmetric badge with a higher score]
What this means is that a company known as Altmetric has produced what are known as article-level metrics for the work and assigned it a weighted score. The score is based on the number of times the work has been mentioned in tweets, on Facebook, Reddit and Google+, in blogs and in the media, and the number of times it has been saved or bookmarked in services like Mendeley and CiteULike (although not within ResearchGate, which seems to be the academic social network of choice in 2013). The same metric is used in the Scopus database, and keen readers of LOL will have observed that the creators of Scopus are themselves keen observers of altmetric scores and social media. However, don't be concerned if your work didn't score because, as Altmetric themselves acknowledge, the usual score for a scholarly work is zero; most academic articles don't create this sort of interest.
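To make the idea of a weighted score concrete, here is a minimal sketch in Python. The source names and weights below are invented purely for illustration; Altmetric applies its own source-specific weighting, which this does not reproduce.

```python
# Illustrative weights: a news story counts for more than a tweet,
# a tweet for more than a Facebook post. These numbers are made up.
ILLUSTRATIVE_WEIGHTS = {
    "news": 8.0,
    "blog": 5.0,
    "tweet": 1.0,
    "facebook": 0.25,
}

def attention_score(mentions):
    """Return a weighted attention score.

    mentions: dict mapping a source name to the count of mentions
    from that source. Unknown sources contribute nothing.
    """
    return sum(ILLUSTRATIVE_WEIGHTS.get(src, 0.0) * n
               for src, n in mentions.items())

print(attention_score({"tweet": 12, "blog": 2, "news": 1}))  # 12 + 10 + 8 = 30.0
```

The point of weighting is simply that not all mentions signal the same level of engagement, so a raw tally of mentions would be less informative than this kind of weighted sum.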

One detects a certain amount of eye-rolling around the intersection of traditional scholarly publishing and social networking that this represents, particularly when well-cited articles that represent solid contributions to disciplinary knowledge receive low scores. Even so, it casts an interesting light on the question of academic metrics in general, and in some cases may turn up useful new information that is not captured by traditional citation counts. Citation counting as a measure of significance is a phenomenon of the computer age, based on the relatively simple notion that for one work to cite another is a significant act, and that the number of citations can mark the degree of a work's significance. This became apparent in the 1960s, when ground-breaking work by scholars like Watson and Crick in the sciences and Chomsky in the social sciences received large numbers of citations in the Science Citation Index. Measurement was then extended to citation rates for journals, which led to Journal Citation Reports and the famous Journal Impact Factor. As the expectation grew that academic research could and should produce measurable outputs and results, so did the interest in bibliometric measures, and the last ten years have seen new measures like the h-index (I will write another post about that soon) and a stronger interest in actions that can be taken to ensure that articles are not only published but cited as well.
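Since the h-index has just been mentioned, a one-line definition may be useful in the meantime: an author has an h-index of h if h of their papers have each been cited at least h times. A minimal sketch in Python, assuming the citation counts are already in hand:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    # Sort citation counts from highest to lowest, then walk down the
    # list: the i-th paper (1-indexed) counts while it has >= i citations.
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4 papers have at least 4 citations each -> 4
```

Note how the measure is journal-centric by construction: it only sees whatever citation counts the underlying database records, which is exactly the limitation discussed below.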

One of the problems with the standard bibliometric approach has been that it only measures a limited number of markers in a limited number of places. Before the advent of Google Scholar, the citations of an article could really only be found if it was published in a journal indexed by Web of Science or Scopus. Web of Science in particular was quite restrictive in the number of titles it covered, and although citations of work published in other journals, or of books and other types of publication, could be found, doing so was a cumbersome manual process. To some extent Google Scholar overcame this limitation, but at the cost of delivering inflated counts (more on this to follow as well). One of the problems with straight citation counting is that every citation carries an equal value, and by extending the boundaries within which citations were counted, Google Scholar could give apparent significance to being cited in a paper that may itself have been rejected for publication (but still existed on the Internet), or even in a student essay. Parallel to this, even within traditional journals, "gaming" of citations by canny journal editors seeking to boost their Impact Factors was becoming common.

Another complaint about citation-based bibliometrics was that they looked at only one specific measure in any case: citation of a paper by an article in a journal that was then indexed by one of a handful of databases. In some disciplines this happens less often than in others. It is standard practice in many of the sciences to "cite everything that moves", or at least every article of possible relevance, whereas in literary studies, for example, relatively few journal articles are written and they tend to consist of direct examination of literary texts, with little requirement for citing the works of other authors. There's nothing "wrong" with that paradigm; it's just different. In many fields, distinguished discipline-leading scholars have Scopus or Web of Science h-indices that are, or appear to be, overshadowed by those of youngish scientists. Citation moves at a relatively slower pace in the humanities, and in the social sciences as well, and it can sometimes take several years before an article is "discovered" and begins to attract interest. And, of course, in these disciplines the publishing of books and book chapters is much more common, and these do not generally feed readily into the journal-centric h-index. In the creative arts, scholarly publishing per se is relatively unimportant, but there is a robust tradition of "serious" magazines and, more recently, blogging.

Over recent years other information about the impact of published work has become available, and these days publishers often let authors know how often their articles have been downloaded. On top of this, social networking has given readers a chance to enter the game themselves by tagging articles in Reddit or CiteULike, saving them in Mendeley, tweeting them or posting them on Facebook, or mentioning them in blog posts. Downloads are a tricky issue that I won't go into here, but social activities are all suggestive of engagement or influence and could be seen as indicators that a particular work has made contact with an audience, and could even be seen as early indicators of likely citation activity. In an article in the Journal of Medical Internet Research called "Can Tweets Predict Citations?", Gunther Eysenbach found that "highly tweeted articles were 11 times more likely to be highly cited than less-tweeted articles", but he also concluded that "tweetations" (some of you might want to take a walk in the fresh air for a minute) were not a substitute for citations as a proxy for scholarly merit but instead "should be primarily seen as a metric for social impact and knowledge translation". There is a real difficulty here in disentangling the abstract idea of scholarly merit from the more easily observed phenomenon of popularity, but this problem really predates the twitterverse, and the recent case of academic fraud by social psychologist Diederik Stapel is evidence that the race for citations has probably been distorting both research direction and research practice itself for many years.

So, should Massey researchers be concerned that their altmetric score in Symplectic is zero? The short answer to that is no: a score of zero is totally normal and does not mean that the work will not be cited and well regarded within the appropriate community of scholars. On the other hand, a higher score should be cause for satisfaction, and it's always pleasing to know that work is being read, or at least noticed. Not all the tweeters will have read much beyond the title, the act of saving articles to Mendeley may represent either good intentions or display behaviour, and Facebooking academic articles will not always be done for a serious purpose. Whatever the reason, though, a positive number is inevitably pleasing. The real problem arises, however, when the citation score remains at zero while the altmetric is relatively high: can anything be made of this in terms of describing the impact of the work? Here I'm going to stick my neck out and say maybe. A large number of tweets may in itself represent only a sort of Bieberian accident, exciting but transient and difficult to explain. If a case is to be made for the use of altmetrics as evidence of social engagement, then just throwing out a number probably isn't going to do very much; instead you need to look at the blogs and media articles that mention your work, to see who they are written by and what they say. Finding that information could shed interesting new light back on your work and what it means beyond the confines of academia.

The next question is how to get your altmetric figure above zero, but this posting has gone on long enough, so that can wait for another time. In the meantime, please retweet.

Bruce White
eResearch Librarian
eResearch on Library Out Loud
