I recently read an article on academic journal (and article) rankings (and another one here). You might expect, given that I am a connoisseur of ranking methods, that I would support the mission to rank academic journals (and articles). I do not. No matter the method, ranking "impact" over-simplifies scholarship into a measure of popularity. I found out from The Atlantic that the system of impact ranking had been invented by librarians in the '70s merely to evaluate which journals were the most important in each field. Now tenure committees are using these rankings to decide whether an academic has done good work. Huh? Shouldn't a tenure committee actually, you know, read the articles in question to decide whether the work is good? Isn't that the committee's job: to bring expert opinion to bear and to make subtle, careful, and thoughtful judgments?
The problem, of course, is that there are actually two different variables at play, rather than a single "impact." The real variables are trustworthiness and importance. If the research method is sound, if the authors understood the literature correctly and asked reasonable follow-up questions, if the data are interpreted correctly, then the research is trustworthy. A lot of trustworthy research gets done, period. But a lot of trustworthy research is never, ever published because it fails to meet a high threshold of importance. Novel results that turn a field on its head are important. Results that create a new field of research are important. Results that really expand or complicate a field are important. Current academic journals are heavily biased toward publishing results that are important by this definition -- novel, theory-creating or -expanding or -redefining. That tendency makes a lot of sense, but in the aggregate it means that published research is less likely to be trustworthy, since "important" results can often reflect a design flaw in the research. At the very, very least, academic committees ought to score articles along both axes -- trustworthiness and importance -- and journals (or e-publications) ought to publish a lot MORE trustworthy, UNimportant articles. I, for one, would like to know, "Hey, 50 researchers had good but boring results with this theory," so that I know an idea has been confirmed and re-confirmed by experiment before building upon it. I would be especially interested to know that a theory had been tested and failed over and over again, so I would not waste my time.
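To make the two-axis idea concrete, here is a minimal sketch of what scoring trustworthiness and importance separately might look like. The class name, the 0-10 scale, and the cutoff values are hypothetical illustrations of the distinction, not a proposed standard.

```python
# A minimal sketch, assuming a two-axis scoring scheme. The class, the 0-10
# scale, and the cutoffs below are hypothetical, not an existing standard.
from dataclasses import dataclass

@dataclass
class ArticleScore:
    title: str
    trustworthiness: int  # soundness of method, data handling, interpretation (0-10)
    importance: int       # novelty / how much it reshapes the field (0-10)

def solid_but_boring(scores: list[ArticleScore]) -> list[ArticleScore]:
    """The trustworthy-but-unimportant work that journals rarely publish today."""
    return [s for s in scores if s.trustworthiness >= 8 and s.importance <= 3]

scores = [
    ArticleScore("Fiftieth replication of a known effect", trustworthiness=9, importance=2),
    ArticleScore("Field-upending result from a tiny sample", trustworthiness=4, importance=9),
]
print([s.title for s in solid_but_boring(scores)])
```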
But there is an even more radical solution. What if e-publication of research took full advantage of the tools that already exist to make publication as beneficial as possible all around, to really build an easy-to-use knowledge engine? Three specific capabilities come to mind: 1) links, 2) data, and 3) commenting. The links part is clear: if enough authors went to e-publication, we could track down an article's set of citations much more easily than we can now. The data part is clear, too: Google Docs spreadsheets and Tableau, for example, are both ways that researchers could publish their raw data by embedding it in the article. Often I have wished I could look at the raw data and run the numbers for myself. Errors do happen. And I think it would cut down on the statistical chicanery if the norm were to publish raw data as well as the polished tables and p-values.
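As a rough illustration of what "running the numbers for myself" could look like if raw data shipped with the article: the sketch below reloads a hypothetical published CSV and recomputes the headline comparison. The file name, the column names, and the choice of a two-sample t-test are my assumptions, stand-ins for whatever a given article actually reports.

```python
# A minimal sketch, assuming an article embeds its raw data as a CSV with
# hypothetical "group" and "outcome" columns and reports a two-sample t-test.
import csv
from scipy import stats

control, treatment = [], []
with open("published_raw_data.csv", newline="") as f:
    for row in csv.DictReader(f):
        value = float(row["outcome"])
        (control if row["group"] == "control" else treatment).append(value)

# Recompute the reported statistic instead of trusting the polished table.
result = stats.ttest_ind(control, treatment)
print(f"t = {result.statistic:.3f}, p = {result.pvalue:.4f}")
```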
The best addition of all would be commenting. What if I could read an article with comments by other experts in the field? (Of course, there would need to be some kind of threshold set for the ability to comment.) If I could read a paragraph on the research method, then immediately see several other expert researchers' key concerns about it, I would be in a better place to judge whether the method is trustworthy. If I could read a paragraph about the theory being advanced, then immediately see several other experts' alternatives to said theory, again, I would be in a better place to judge the importance of the article. If a specific paragraph were linked to or bookmarked by a bunch of other academic articles, well, that would tell me even more. Comments, being qualitative, would provide a wealth of information that quantitative measures like pageviews and citations never can. Yet it seems like the direction universities (specifically, tenure committees) want to go is simply ranking. Ranks make sense for sports, or debate teams, because those are closed systems with only one outcome: wins. Academic publication is not like that and ought to be "scored" differently, or not really scored at all. My fantasy is that articles -- one per webpage -- would be the quanta of a much wider-ranging debate carried out in the links, highlighting, quotations, line-item comments, end-of-article comments, and reviewer votes. My fantasy is that a flawed article would be obviously labeled as flawed, so that readers could immediately tread with caution. If you're more cynical, then at least you could hope for a system where readers know which articles represent solid, consensus-based work and which are contentious. The current system, where flawed work is almost never retracted and a reader has to work hard to find out whether an article has been retracted or discredited, serves no one's interest.
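For what it's worth, here is a rough sketch of the kind of data model such a commentable article page might sit on. Every class and field name is a hypothetical illustration, not any existing platform's schema.

```python
# A minimal sketch, assuming paragraph-level comments, inbound links, and
# article-level flags. All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Comment:
    author: str     # a credentialed expert, per whatever commenting threshold is set
    text: str
    votes: int = 0  # reviewer votes on the comment itself

@dataclass
class Paragraph:
    text: str
    comments: list[Comment] = field(default_factory=list)
    inbound_links: list[str] = field(default_factory=list)  # articles citing this paragraph

@dataclass
class Article:
    title: str
    paragraphs: list[Paragraph] = field(default_factory=list)
    flags: list[str] = field(default_factory=list)  # e.g. "methods disputed", "retracted"

    def is_flagged(self) -> bool:
        # A flawed article should be visibly labeled as flawed.
        return bool(self.flags)

# Example: a methods paragraph annotated with an expert's concern.
methods = Paragraph(text="We surveyed 20 undergraduates...",
                    comments=[Comment("Dr. Example", "Sample is too small and non-random.")])
paper = Article(title="A field-upending result", paragraphs=[methods], flags=["methods disputed"])
print(paper.is_flagged())
```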
Maybe, as this article implies, the problem is that journals are doing what's best for the journal (and these publishers are for-profit), not what's best for science. Here's one more article, on the brave new world of publishing that might await. Another one, on the statistical importance of publishing null results; finally, one more on the problems with PLOS.