These are times of upheaval in scholarly communication. One thing that seems clear is that open access will prevail: since most scientific research is directly and indirectly funded through public money, it is simply inescapable that the public should have open access to the results of the research. And it is also inescapable that there needs to be careful stewardship of that public money and that it should not be syphoned off to support the large profit margin of legacy publishers. So, let’s accept that open access and fair pricing are non-negotiable and inevitable. What’s next?
What made open access feasible was the advent of the internet and the possibility to disseminate research papers quickly and without access controls. Many of us make our manuscripts available on disciplinary sites such as LingBuzz, the Semantics Archive, PhilPapers, and so on. In many ways, those sites are the primary way that new results first reach the community.
Beyond that, what else do we really need? Isn’t posting papers on such archives all that’s required to keep the engines of collaborative scientific progress well-oiled? Do we need peer review, do we need journals?
Since I am the co-founding editor of a staunchly peer-reviewed journal, with a rather draconian rejection rate, you might think that my answer will be unambiguous, but in fact, I don’t think these are easy questions nowadays.
What do journals actually offer? Here are the main considerations:
- peer and editorial feedback to authors
- curation: selecting and highlighting the best work
- validation for hiring, tenure, and promotion decisions
- copy-editing and typesetting
- income for the publisher
The main costs of running a journal lie in copy-editing, typesetting, and hosting; the editorial work and the work of the peer reviewers are typically pro bono, covered by their employers. Some journals may pay the editor a small stipend or expense account, but that is the exception, in linguistics at least.
Is it all worth it?
Authors at S&P seem to value the intensive and extensive feedback they receive. Is there any other mechanism by which authors can reliably receive such feedback? Experiments with open peer review have not taken off, at least in linguistics, but maybe it’s worth a try. One might hope that authors, especially junior ones, get ample feedback from their mentors and peers before a paper is injected into the publication pipeline. But judging by what gets submitted to S&P, I’m not so sanguine.
Do we need curation? If every paper is available in the disciplinary archives, how do readers decide which are worth the investment of a day of intense study, or at least an hour of cursory reading? Will established authors have a lock on the attention of potential readers?
Do tenure & promotion committees need the validation that comes from a paper having been published in a reputable journal? Shouldn’t they simply go by the considered opinion of the external letter writers and maybe by the objective citation record (keeping in mind, again, that much of the scientific communication happens through disciplinary archives and other ways of exchanging papers and drafts, so that citation archaeology should be maximally permissive, that is, more like Google Scholar than Web of Science)?
Do we need copy-editing and typesetting? When we started S&P, it was clear to us that a newfangled open access journal needed a very professional “look” to its articles. That, together with my frankly out-of-control obsession with typographic precision, led to a very labor-intensive production process. Our competitor journals mostly outsource this step to companies without disciplinary expertise; they don’t necessarily offer copy-editing at all, and the typographic results are somewhat problematic. So, S&P can be proud of its presentation. But it is a major pain point nevertheless. More on this later.
Now, these aspects of journals are not inextricably linked. We could easily unbundle them. And I think we should. We should experiment with a good number of models and see which ones work and which ones don’t. We may end up with a much more interesting and fruitful landscape of publication avenues.
Here are some of the options I see:
We could have “journals” that simply are listings of articles in the archives that the editors consider highlight-worthy. Something like the curated playlists on music streaming services such as Spotify or Apple Music. Don’t know what to listen to among the millions of songs? Let an Apple Editor make the choice for you. Don’t know what papers to read among the hundreds on LingBuzz? Let our editorial board guide you.
A considerable step up from that is the “overlay journal” (two examples). Here, authors post their manuscripts to an archive and also submit them to the overlay journal, which conducts standard peer review, asks for revisions, and, perhaps but not necessarily, takes charge of copy-editing and typesetting. Accepted articles are updated on the archive and the journal links to the archived article from its table of contents. I think this is a very promising model.
Another model is to slim down peer review to the bare bones: simply make sure that an article isn’t complete nonsense and then publish basically everything. This is the model of several open access journals. Typically, there are production costs financed through author publication charges. Colin Phillips reports positively on his experience editing such a journal.
There are other possible recombinations of various ingredients of the journal system. I am very excited about the possibilities.
In the spirit of rethinking all aspects of scholarly communication, even a now firmly established journal like S&P should be nimble and consider ways of making things (even) better. Here are some thoughts and questions (my own, not yet discussed with the other members of the editorial team):
- How can we address the major pain points in the production process? We do not outsource to disciplinarily naive companies, but rely on graduate student labor. This is not the most time-efficient way of doing things, even if the end product is superior. If linguists were as proficient in LaTeX as mathematicians or computer scientists, we could probably reduce the time from acceptance to publication, but that doesn’t appear to be a realistic scenario.
- We might highlight accepted articles in a way that reduces the pain of waiting for the official publication. Maybe S&P’s homepage should link to the author’s final version of accepted papers right away (perhaps even with an assigned DOI). That way, they could be listed on CVs as published, for all intents and purposes.
- I’d like to think about acknowledging the work of reviewers more openly. Perhaps reviewers should have the option of being named as having helped a particular paper on its way to publication.
What other things should we think about?