Archive for category: Statistics

Alt-metrics: A manifesto

October 28th, 2010 by
J. Priem, D. Taraborelli, P. Groth, C. Neylon (2010), Alt-metrics: A manifesto, (v.1.0), 26 October 2010. http://altmetrics.org/manifesto

No one can read everything. We rely on filters to make sense of the scholarly literature, but the narrow, traditional filters are being swamped. However, the growth of new, online scholarly tools allows us to make new filters; these alt-metrics reflect the broad, rapid impact of scholarship in this burgeoning ecosystem. We call for more tools and research based on alt-metrics.

As the volume of academic literature explodes, scholars rely on filters to select the most relevant and significant sources from the rest.

Unfortunately, scholarship’s three main filters for importance are failing:

(more…)

ReaderMeter: Crowdsourcing research impact

September 22nd, 2010 by

Readers of this blog are not new to my ramblings on soft peer review, social metrics and post-publication impact measures:

  • can we measure the impact of scientific research based on usage data from collaborative annotation systems, social bookmarking services and social media?
  • should we expect major discrepancies between citation-based and readership-based impact measures?
  • are online reference management systems a more robust data source for measuring scholarly readership than traditional usage factors (e.g. downloads, clickthrough rates, etc.)?

These are some of the questions addressed in my COOP ’08 paper. Jason Priem also discusses the prospects of what he calls “scientometrics 2.0” in a recent First Monday article, and it is really exciting to see a growing interest in these ideas from both the scientific and the STM publishing communities.

We now need to think of ways of putting these ideas into practice. Science Online London 2010 earlier this month offered a great chance to test a real-world application of these ideas in front of a tech-friendly audience and this post is meant as its official announcement.

ReaderMeter is a proof-of-concept application showcasing the potential of readership data obtained from reference management tools. Following the announcement of the Mendeley API, I decided to see what could be built on top of the data exposed by Mendeley, and the first idea was to write a mashup aggregating author-level readership statistics based on the number of bookmarks received by each of an author’s publications. ReaderMeter queries the data provider’s API for articles matching a given author string, parses the response, and generates a report with several metrics that attempt to quantify the relative impact of an author’s scientific output based on its consumption by a population of readers (in this case the 500K-strong Mendeley user base):



The figure above shows a screenshot of ReaderMeter’s results for social scientist Duncan J Watts, displaying global bookmark statistics, the breakdown of readers by publication, and two indices (the HR index and the GR index) which I compute using bookmarks as the variable, by analogy with the popular citation-based H-index and G-index. Clicking on a reference lets you drill down to readership statistics for an individual publication, including the scientific discipline, academic status and geographic location of its readers.
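To make the two indices and the summary statistics concrete, here is a minimal sketch in Python of how they can be computed from per-publication bookmark counts, using the standard H-index and G-index definitions with bookmarks in place of citations. This is only an illustration of the idea, not ReaderMeter’s actual code, and the bookmark counts in the example are invented.

# Sketch: author-level metrics from per-publication bookmark counts.
# The list of counts at the bottom is invented for illustration.

def hr_index(bookmarks):
    """Largest h such that h publications have at least h bookmarks each."""
    counts = sorted(bookmarks, reverse=True)
    return sum(1 for rank, c in enumerate(counts, start=1) if c >= rank)

def gr_index(bookmarks):
    """Largest g such that the top g publications have at least g*g bookmarks in total."""
    counts = sorted(bookmarks, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(counts, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

def author_report(bookmarks):
    """Summary statistics of the kind shown in the ReaderMeter report."""
    return {
        "publication_count": len(bookmarks),
        "bookmark_count": sum(bookmarks),
        "single_most_read": max(bookmarks, default=0),
        "hr_index": hr_index(bookmarks),
        "gr_index": gr_index(bookmarks),
    }

print(author_report([25, 12, 8, 8, 3, 1]))
# {'publication_count': 6, 'bookmark_count': 57, 'single_most_read': 25,
#  'hr_index': 4, 'gr_index': 6}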

A handy permanent URL is generated to link to ReaderMeter’s author reports (using the scheme: [SURNAME].[FORENAME+INITIALS]), e.g.:

http://readermeter.org/Watts.Duncan_J

I also included a JSON interface that exposes the same statistics in a machine-readable format, e.g.:

http://readermeter.org/Watts.Duncan_J/json
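To illustrate the naming scheme, here is a small helper of my own (not part of ReaderMeter) that builds both URLs from a forename and surname:

# Build the report and JSON URLs following the [SURNAME].[FORENAME+INITIALS] scheme.
def readermeter_urls(forename, surname):
    slug = f"{surname}.{forename.replace(' ', '_')}"
    return f"http://readermeter.org/{slug}", f"http://readermeter.org/{slug}/json"

print(readermeter_urls("Duncan J", "Watts"))
# ('http://readermeter.org/Watts.Duncan_J', 'http://readermeter.org/Watts.Duncan_J/json')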

Below is a sample of the JSON output:

{
        "author": "Duncan J Watts",
        "author_metrics":
        {
                "hr_index": "15",
                "gr_index": "26",
                "single_most_read": "140",
                "publication_count": "57",
                "bookmark_count": "760",
                "data_source": "mendeley"
        },
        "source": "http://readermeter.org/Watts.Duncan_J",
        "timestamp": "2010-09-02T15:41:08+01:00"
}
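A small script can consume this JSON interface directly; the URL and field names below are taken from the sample above, and error handling is kept deliberately minimal since this is just a sketch.

# Minimal consumer of the ReaderMeter JSON interface (fields as in the sample above).
import requests

report = requests.get("http://readermeter.org/Watts.Duncan_J/json").json()
metrics = report["author_metrics"]
print(f'{report["author"]}: HR index {metrics["hr_index"]}, '
      f'GR index {metrics["gr_index"]}, {metrics["bookmark_count"]} bookmarks '
      f'across {metrics["publication_count"]} publications')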

Despite being just a proof of concept (it was hacked together in a couple of nights!), ReaderMeter attracted a number of early testers who gave its first release a try. Its goal is not to redefine the concept of research impact as we know it, but to complement this notion with usage data from new sources and to help identify aspects of impact that may go unnoticed when we focus only on traditional, citation-based metrics. Before a mature version of ReaderMeter is available for public consumption and for integration with other services, though, several issues will need to be addressed.

1. Author name normalisation

The first issue to be tackled is the fact that the same author may appear in bibliographic records under a variety of spelling variants: Rod Page was among the first to spot and extensively discuss this issue, which will hopefully be addressed in the next major upgrade (unless a fix is offered directly by Mendeley in a future upgrade of their API).
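One conceivable normalisation step, sketched below purely for illustration (it is not the fix ReaderMeter or Mendeley will ship), is to collapse common spelling variants of the same name onto a single key before aggregating bookmark counts:

# Collapse spelling variants of an author name onto a comparable key.
import re
import unicodedata

def author_key(name):
    """Map 'Watts, Duncan J.', 'D. J. Watts' and 'Duncan J Watts' to the same key."""
    name = unicodedata.normalize("NFKD", name).encode("ascii", "ignore").decode()
    if "," in name:                      # 'Surname, Forename' -> 'Forename Surname'
        surname, forename = [part.strip() for part in name.split(",", 1)]
        name = f"{forename} {surname}"
    parts = re.sub(r"[.\s]+", " ", name).strip().lower().split()
    surname, initials = parts[-1], "".join(p[0] for p in parts[:-1])
    return f"{surname}.{initials}"       # e.g. 'watts.dj'

assert author_key("Watts, Duncan J.") == author_key("D. J. Watts") == "watts.dj"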

2. Article deduplication

A similar issue affects individual bibliographic entries, as noted by Egon Willighagen among others. Given that publication metadata in reference management services can be extracted from a variety of sources, the uniqueness of a bibliographic record is far from guaranteed. In practice, several instances of the same publication can show up as distinct items, which skews the statistics whenever individual publications and their relative impact need to be considered (as when calculating the H- and G-index). To what extent crowdsourced bibliographic databases (such as those of Mendeley, CiteULike, Zotero, Connotea, and similar distributed reference management tools) can tackle article duplication as effectively as manually curated bibliographic databases is an interesting question that has sparked a heated debate (see this post by Duncan Hull and the ensuing discussion).
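For illustration, one possible heuristic (my own sketch, not how Mendeley or ReaderMeter resolve duplicates) is to key records on the DOI when one is present, fall back to a normalised title otherwise, and merge the reader counts of records that collapse onto the same key:

# Deduplicate bibliographic records before computing impact metrics.
import re

def record_key(record):
    """Prefer the DOI as a unique key; fall back to a normalised title."""
    doi = (record.get("doi") or "").strip().lower()
    if doi:
        return ("doi", doi)
    title = re.sub(r"[^a-z0-9]+", " ", record.get("title", "").lower()).strip()
    return ("title", title)

def deduplicate(records):
    """Merge records that share a key, summing their reader (bookmark) counts."""
    merged = {}
    for rec in records:
        key = record_key(rec)
        if key in merged:
            merged[key]["readers"] += rec.get("readers", 0)
        else:
            merged[key] = dict(rec, readers=rec.get("readers", 0))
    return list(merged.values())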

3. Author disambiguation

A far more challenging problem is disambiguating real homonyms. At the moment, ReaderMeter is unable to tell the difference between two authors with an identical name. Considering that surnames like Wang appear to be shared by about 100M people on the planet, disambiguating authors with a common surname is not something that can be easily sorted out by a consumer service such as ReaderMeter. Global initiatives with broad institutional support, such as the ORCID project, are trying to fix this problem for good by introducing a unique author identifier system, but precisely because of their scale and ambitious goals they are unlikely to provide a viable solution in the short run.

4. Reader segmentation and selection biases

You may wonder: how representative are data extracted from Mendeley of an author’s actual readership? Calculating author impact metrics from the user population of a single service will by definition produce skewed figures, owing to different adoption rates across scientific communities and across demographic segments (e.g. academic status, language, gender) within the same community. And what about readers who don’t use any reference management tool at all? Björn Brembs posted some thoughtful considerations on why any attempt at measuring impact from the user population of a given platform or service is doomed to fail. His proposed solution, however (a universal outlet through which all scientific content consumption would happen), sounds not only like an unlikely scenario but also, in many ways, an undesirable one. Diversity is one of the key features of the open-source ecosystem, for one, and as long as interoperability is achieved (witness the example of the OAI protocol and its multiple software implementations), there is no need for a single service to monopolise the research community’s attention for projects such as ReaderMeter to be realistically implemented. The next step on ReaderMeter’s roadmap will be to integrate data from a variety of content providers (such as CiteULike or Bibsonomy) that offer free access to article readership information: although not the ultimate solution to the enormous problem of user segmentation, integrating data from multiple sources should help reduce the biases introduced by the population of any single service.
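As a rough sketch of that integration step (the provider names and reader counts below are invented for illustration), per-provider readership figures could be kept side by side for each publication rather than summed, so that the population biases of each service remain visible in the report:

# Combine readership counts for the same publication from several providers.
from collections import defaultdict

def merge_sources(per_source_counts):
    """per_source_counts maps provider name -> {publication key: reader count}."""
    combined = defaultdict(dict)
    for provider, counts in per_source_counts.items():
        for publication, readers in counts.items():
            combined[publication][provider] = readers
    return dict(combined)

print(merge_sources({
    "mendeley":  {"collective dynamics of small-world networks": 140},
    "citeulike": {"collective dynamics of small-world networks": 55},
}))
# {'collective dynamics of small-world networks': {'mendeley': 140, 'citeulike': 55}}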

What’s next

I will be working in the coming days on an upgrade to address some of the most urgent issues. In the meantime, feel free to test ReaderMeter, send me your feedback, follow the latest news on the project, or just help spread the word!

A general model of productivity?

June 15th, 2009 by

I want to try something a bit different in this post. Here at AP.com, we’ve talked a lot about tools, theory, trends and the general ephemera of academic productivity. But writing as academics, we should probably be trying to take this experience and build it into a cohesive model of productivity. So my goal here is to suggest a general model, one that we might use to understand what we’ve learned from previous posts and hopefully apply to our own work.

My starting point for this post was simple: I wanted to know how my productivity has changed (hopefully improved) since I first started my DPhil. From keeping a research journal, I know that some days are more productive than others, and it would be very helpful if I could understand when those fits and starts occur, spot co-occurring events, and thereby learn when to say “Forget work, I’m going for a run.”

In other words, I wanted to plot my productivity cycle over time. It might look something like this:

[Figure: a sketch of productivity plotted over time]

But the obvious problem with this exercise is how to measure productivity. It’s a subject that’s been tackled indirectly on this site before, but going through the old posts I haven’t yet found any attempt at a general theory, and related measures, of productivity. So, drawing on the collected wisdom of previous AP.com posts, here’s a rough sketch of such a theory.
(more…)

We are now a^H^H^H^H^H^H^H^H productivity blog

February 21st, 2008 by

I always wondered how people see the academic world from outside. How do we gauge the interest of the general public in what academics have to say (on average)? One easy way to look at this question is to see how often people read an article that has the word ‘academic’ in it.

A proxy for what people read nowadays is digg.com, and a tool to see how often people digg academic posts is now available on Dan Zarella’s blog. Given a keyword, the tool returns the average number of links accumulated by stories popular on Digg that mentioned that keyword, based on 2007 data.

Well, behold what happens when you enter “academic”:

[Figure: Digg statistics for the keyword “academic”]

And compare it to what you get when you type “productivity”:

[Figure: Digg statistics for the keyword “productivity”]

Why is this important? Well, on average, a single digg increases traffic by 0.10%, so a story that gets 3,000 diggs increases total traffic to the referring site by 300%.

So, from now on we are a^H^H^H^H^H^H^H^H productivity blog :)

The Difference Between Significant and Not Significant is Not Statistically Significant

December 11th, 2006 by

MINDLESS SIGNIFICANCE TESTING


Decision Science News has a post on hypothesis testing that I find relevant.

Some well-made points grow old while no one pays attention to them. One of the most embarrassing for social science is its categorical perception of p-values.

Andrew Gelman, who tends a kindred Web site, and Hal Stern have an article whose title says it all: The Difference Between “Significant” and “Not Significant” is not Itself Statistically Significant.

Link to The Difference Between Significant and Not Significant is Not Statistically Significant
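To see the point with a toy example (the estimates and standard errors below are invented, not taken from the article): an effect of 25 with standard error 10 is conventionally “significant” (z = 2.5), while an effect of 10 with the same standard error is not (z = 1.0); yet the difference between the two estimates, 15 with standard error sqrt(10² + 10²) ≈ 14.1, is itself nowhere near significant.

# Toy illustration of Gelman and Stern's point; all numbers are invented.
from math import sqrt

est_a, se_a = 25.0, 10.0   # z = 2.5 -> conventionally "significant"
est_b, se_b = 10.0, 10.0   # z = 1.0 -> "not significant"

z_diff = (est_a - est_b) / sqrt(se_a**2 + se_b**2)   # assumes independent estimates
print(round(z_diff, 2))    # 1.06 -> the difference itself is not statistically significant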

 
