Sanli 5 May 2012

## Academia can benefit a lot from a more democratic funding system

Posted in Ethics, politics, Web 2.0

Much more has been said about the failures of the current grant system than has actually been changed about it. My favorite opinion piece is this one by Peter A. Lawrence. The single-sentence abstract says it all: “The granting system turns young scientists into bureaucrats and then betrays them.” The article makes a couple of suggestions for improving the distribution of funding, but the title of a comment by Markus Noll explains well enough why nothing is changing: “Scientists in power will never change their system unless forced.”

Besides the unbalanced research budget, which is outside scientists’ control, part of the current deficiency is due to the slow and opaque peer-review system for both grants and publications. I should emphasize that, in my opinion, scientific peer review is still a much better evaluation of merit than any quantitative measure advocated by publishers and some politicians. The main problem is something physicists know well: shot noise. A grant proposal or an article is still reviewed by only a handful of peers, who almost always have interests that conflict with the author’s, the limited budget itself being the minimum. The unavoidable confidentiality does not improve the morality of the review process either. It may make the process more streamlined and decisive, but it surely leaves it more vulnerable to abuse, both in favor of and against the authors or applicants. One can only hope that these “unfortunate cases” average out when a larger community of peers is consulted. That would also mean a much lower workload for every reviewer, and for the authors. The problem is that with the old-fashioned bureaucracy of funding agencies and publishers, such large-scale consultation would take forever.
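To make the shot-noise argument concrete, here is a minimal sketch. All numbers are hypothetical (a “true merit” of 7 on a 0–10 scale, an individual reviewer error of standard deviation 2); the point is only how the spread of a panel’s average score shrinks as the panel grows:

```python
import random
import statistics

random.seed(42)

TRUE_MERIT = 7.0    # hypothetical "true" quality of a proposal (0-10 scale)
REVIEW_NOISE = 2.0  # standard deviation of an individual reviewer's error
TRIALS = 10_000

def panel_score(n_reviewers):
    """Average score from a panel of n noisy, independent reviewers."""
    return statistics.mean(
        random.gauss(TRUE_MERIT, REVIEW_NOISE) for _ in range(n_reviewers)
    )

for n in (2, 3, 10, 100):
    scores = [panel_score(n) for _ in range(TRIALS)]
    print(f"{n:3d} reviewers: score spread = {statistics.stdev(scores):.2f}")
```

The spread falls off roughly as the reviewer error divided by the square root of the panel size, so the score from two or three reviewers carries an uncertainty comparable to the gap between a funded and a rejected proposal, while a large community of peers averages the “unfortunate cases” away.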

With advances in communication tools and online social interaction, it is nowadays much easier and cheaper to run numerous large-scale polls with various degrees of control. Even for those who complain about the ineffectiveness of massive equal-vote polls, the web has developed several means of controlling and enhancing the mechanism. One example is the progress of massively multiplayer online role-playing games (MMORPGs), which are nowadays used as tools for sociology and psychology research. These technologies could easily make peer review more efficient, speed it up, and greatly enhance the signal-to-noise ratio of the whole process (some people even say that random selection would do just as well as the current peer-review system!).
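As a toy illustration of that last claim, one can compare a small noisy panel against a pure lottery. All parameters here are invented for the sketch (1000 proposals, a 10% success rate, reviewer noise larger than the actual spread in proposal quality):

```python
import random
import statistics

random.seed(1)

N_PROPOSALS = 1000
FUNDED = 100   # 10% success rate
NOISE = 1.5    # individual reviewer error, larger than the merit spread below

# Hypothetical "true" merits of the proposals
merits = [random.gauss(5.0, 1.0) for _ in range(N_PROPOSALS)]

def noisy_score(merit, n_reviewers=3):
    """Score assigned by a small panel of noisy, independent reviewers."""
    return statistics.mean(random.gauss(merit, NOISE) for _ in range(n_reviewers))

# Selection by a three-person panel, ranked on noisy scores
ranked = sorted(range(N_PROPOSALS), key=lambda i: noisy_score(merits[i]), reverse=True)
panel_pick = [merits[i] for i in ranked[:FUNDED]]

# Selection by pure lottery
lottery_pick = random.sample(merits, FUNDED)

print(f"mean true merit, panel:   {statistics.mean(panel_pick):.2f}")
print(f"mean true merit, lottery: {statistics.mean(lottery_pick):.2f}")
```

In this model the panel still beats the lottery on average, but the margin shrinks as reviewer noise grows relative to the real differences between proposals, which is the serious core of the “random selection” provocation.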

It surprises me that the scientific community, while at the forefront of technological advances, still lags in using these tools. Or maybe their use is simply politically discouraged, or restricted to non-sensitive processes like electing representatives of scientific societies.

- - - - - -
1. 5 May 2012 23:02, Scott Wagers

Interesting post and interesting idea.

I wonder if using social media tools as you suggest will do anything to address the regression to the mean. Perhaps that is not the right terminology, but what I am referring to is the tendency for there to be an accepted norm in science and anything outside that norm is discounted.

Yet truly innovative breakthrough science is that which goes beyond the norm. Will the use of social interaction tools improve or worsen this?

2. 7 May 2012 11:04, Jacopo Bertolotti

I understand the rationale behind your argument, but I am still not totally convinced by MMORPG-like peer review. If before publishing I have to submit my paper to the judgment of the whole community, isn’t it faster to just upload it to ArXiv (or any similar service) and let the community decide? And why should the whole community be willing to review my work when a very large number of other papers are presented (submitted/published) at the same time? Wouldn’t it be a huge waste of time?
I am also unhappy with the current system but, so far, I have failed to find any realistic solution.

3. 7 May 2012 15:33, Ludovico Cademartiri

I agree with the arguments of Scott and Jacopo. The problem with peer review lies in its tendency to penalize extraordinary research. If anything, one could almost argue that peer review is too democratic.
I am still struggling to think of a model that would reduce workload, preserve pioneering research and discard non-rigorous science…

4. 7 May 2012 20:20, Gijs van Soest

Sanli, interesting piece, it provoked a few thoughts.
Jacopo makes an important point: in this model, paper/grant reviews would be performed by those with time on their hands to voluntarily sit down and review/vote on a submission. Many people with good ideas and a broader view on their field will have that pretty low on their priority list.
While I share everyone’s concerns about the occasional practical ineffectiveness of peer review, I think that it is in principle a pretty good system, and my experiences with it are fairly positive overall. A good in-depth review can really, really improve a paper, and science benefits from this. A paper is out there forever with your name on it, and I have been grateful more than once to a reviewer who rejected my work for good reasons. Yes, there are ill-mannered reviewers with too much power in anonymity. Tough luck, that happens.
The grants problem is scarcity and nothing else. If your grant is reviewed by a conscientious reviewer, the chances that it will be rejected are larger than if it were reviewed by the prejudiced friend you suggested as a reviewer. The conscientious reviewer will criticize constructively, suggest improvements, and rate your proposal Very Good Plus instead of Excellent, and your project will not be funded. He did what a good reviewer should do; instead, the applicant with the most friends wins. In reality, Very Good proposals should be funded, and truly Excellent proposals are so few and far between that they could barely be a burden on the funding system.
Another aspect of crowd-sourced reviewing is that the review process itself will generate public content related to an unpublished work. How do you treat that information after a decision has been reached? There may have been valuable insights: keep it online and you have ArXiv — no need to publish elsewhere; take it off and you lose the learning ability of the system.

5. 9 May 2012 4:07, Frerik van Beijnum

Very nice discussion. With respect to peer review, I think its imperfections will average out over a career. This may be hard to accept when you get your first unfair rejections. I do feel that the quality of peer review can be improved: some journals accept a single reviewer instead of two or three, and I think a large risk is taken with only one reviewer. The advantage of two reviewers is that a third can be invited in case of contradicting reports.

I would be curious whether rewarding reviewers would improve the system. What if people got an R index too, based on editors’ ratings of their referee reports?

Concerning the funding, I feel that having friends and having a good reputation are mixed up here. If I got to review a friend’s work, I would be more enthusiastic about the topic, as I am familiar with it. However, being his friend, I might also be more critical, as I know what expertise he has, and what he lacks. In other words, the grass may look greener on the other side. In the end you judge partially on someone’s reputation, and to build a good reputation it helps to be visible and to show that you are making good progress. In that process you might gain new friends, and we end up with a chicken-and-egg situation.

6. 9 May 2012 21:36, Ad Lagendijk

Sanli,
interesting suggestion. I have not much to add to the discussion except on the conflict of interest. Reviewers of grants are usually chosen such that they cannot apply for the same grant. For instance, a national program of country X will choose reviewers from countries other than X.

7. 10 May 2012 9:22, Mirjam

From personal experience I get the impression that the peer-review system works remarkably well, both for articles and for grants. The distribution of throughput times is quite broad, and one can sometimes be on the wrong end of that spectrum, but the whole process frequently still takes place in a reasonable time on the scale of the academic research pace. Also, it is often not the reviewers who make the decision: their opinion is used by the editor or granting committee to arrive at one. If those people take their work seriously, they consider the review reports critically and use them for what they are worth (as a committee member it would not be very rewarding if you were not allowed to have your own opinion and were just there to add up the scores on the review reports). In the worst-case scenario one can always appeal a decision and point out when a reviewer is clearly biased or unfair.

The question that remains is whether we want to fund all research through such competitive granting schemes. I think this tends to select for a particular kind of research and a particular kind of researcher, while there is a lot of other high-quality work, and there are many other high-quality people, out there. Moreover, instead of universities identifying for themselves good research and researchers in all their diversity, the decision is left to an external committee that has nothing to do with that university. I think institutions should play a much stronger role in identifying their own excellence and supporting it with solid funding (which no longer exists…).
