MDPI Journals: 2015-2021

(For a PDF version of this blog and the data used to produce it please go to the end of this blog)

In this blog I report on the growth of MDPI journals and papers from 2015 to 2021. It updates previous blogs on the same topic (the most recent is here) that looked at growth up to 2020. The issues which make MDPI's work interesting are now well discussed in those blogs, and in Paolo Crosetto's analysis. I have therefore not provided a detailed commentary on the most recent findings; rather I wish briefly to highlight two developments that are clear in the 2021 data. These are that growth continues and that rejection rates are lower than they were previously.

The methods I have used to compile these data are described at the end of this document. The data I have used are also available there. The data cover 206 of MDPI's 378 journals, which produced over 233,000 publications in 2021, a substantial majority of the 235,600 peer-reviewed papers published in that year. However, data on submissions and acceptance rates are no longer available on the MDPI journals website (as of June 2022). It will therefore not be possible to provide further updates.

1. Growth
By every measure MDPI’s growth continues to be remarkable. The rate of revenue increase has slowed in the last two years, to just over 50%, but even that remains extraordinary.  Note that the proportion of submissions that are published has increased, from around 44% two years ago to over 55% currently (Table 1; Figure 1).

Figure 1

Growth continues in other ways too. In 2021 MDPI launched 84 new journals (!) and acquired two more existing journal titles. There were 378 listed titles in June 2022. There is also growth in formal recognition, with 85 journals now listed with Impact Factors in the Web of Science, up from 80 in the previous year.  

2. Rejection Rates Are Lower
The growth in publications is partly sustained by lower rejection rates. The journals with the lowest rejection rates used to account for only a minority of publications and fees (Tables 2-4). Figures for 2021 now show that journals with low rejection rates are producing a higher proportion of MDPI publications. This fact is important because I have previously questioned claims that MDPI is a form of predatory or vanity publishing, on the grounds that the rejection rates of its journals were too high for that charge to hold. Researchers were not simply paying for publication because, in previous years, more material was rejected than was accepted.

Now some 45% of the MDPI journals I analysed have rejection rates below 40% (Table 2). Papers in these journals account for nearly 38% of revenues from publication fees (Table 3). Conversely, the journals with rejection rates of over 50% account for just over 25% of revenues. Measures of esteem, such as listing in the Web of Science, did not seem to make a difference to rejection rates. The average rejection rate for WoS-listed journals was 42.7%, and for unlisted journals 41.6%.

The changing importance of low rejection rate journals for revenues is illustrated in Figure 2. These graphs rank journals by their rejection rate (lowest first) and show how revenues from publishing fees accumulate as each journal is added. The first graph presents figures in absolute numbers, the second as a proportion of all publishing revenues in this data set. The latter brings out most clearly the financial impact of the low rejection rates.
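
For readers who want to reproduce the cumulative curves in Figure 2 from the accompanying data, the sketch below shows the basic calculation. It is a minimal sketch only: the file name and column names ('journal', 'rejection_rate', 'apc_revenue_chf') are hypothetical stand-ins for whatever the downloadable spreadsheet actually uses.

import pandas as pd

# Minimal sketch of the Figure 2 calculation. File and column names are
# hypothetical; substitute those used in the downloadable dataset.
df = pd.read_csv("mdpi_journals_2021.csv")

# Rank journals from the lowest to the highest rejection rate, then accumulate
# estimated APC revenue journal by journal.
ranked = df.sort_values("rejection_rate")
ranked["cumulative_revenue"] = ranked["apc_revenue_chf"].cumsum()
ranked["cumulative_share"] = ranked["cumulative_revenue"] / ranked["apc_revenue_chf"].sum()

# The first graph in Figure 2 plots 'cumulative_revenue' against rank;
# the second plots 'cumulative_share'.
print(ranked[["journal", "rejection_rate", "cumulative_share"]].head())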


It is important to put this shift in perspective. MDPI has always maintained that it will encourage publication of papers whose findings are true, regardless of their significance or scope. I discussed this issue in my first blog on this topic (see point 4). Significance and scope are to be determined by the reading community after a paper is published, not by the reviewers and editors beforehand.

However the corollary of this approach, given the growth in publications, is that these journals are now more likely to be publishing papers that will be deemed insignificant by the research community. It will become harder to find really good and important papers amid the volume of material generated. This problem is compounded by the fast turn-around times of papers (a median of 38 days from submission to publication in 2021), because the speed increases the risk that mistakes are made in the review and revision process.

MDPI itself has been aware of the dangers of being too inclusive. In its 2015 annual report it noted that the overall rejection rate had increased since the previous year (from 52% to 54%). This achievement was listed among its key performance indicators as a sign of progress. MDPI's shifting stance can best be seen in Figure 3. From 2015 through to 2019 the number of publications grew, generally keeping within a rejection rate range of 50-70%. The average rejection rate, weighted by journal output, is shown by the green line. Indeed in 2019 more papers appeared in journals with rejection rates over 60% than previously. However from 2019 onwards the weighted average has declined (shown in the red line).
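
The output-weighted average plotted in Figure 3 is a simple calculation. The sketch below is a toy illustration with invented figures, not the actual journal data.

# Toy illustration of an output-weighted average rejection rate (the quantity
# plotted as the trend lines in Figure 3). The figures here are invented.
def weighted_rejection_rate(rates, papers):
    """Average the journals' rejection rates, weighted by papers published."""
    return sum(r * n for r, n in zip(rates, papers)) / sum(papers)

# Three hypothetical journals: a large, permissive journal pulls the average down.
print(weighted_rejection_rate([35.0, 55.0, 65.0], [2000, 800, 200]))  # ~42.3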

Nevertheless, I would not conclude from these changes that MDPI is now a predatory press. I think it is rarely helpful to use categories of journals such as 'predatory' or 'not predatory' in the current publishing ecosystem. All major publishing houses profit from free researcher labour. Some exploit individual academics and their research funds through charging licensing fees, others raise revenues from libraries and subscription charges, and others make deals with governments to cover entire University sectors. Some combine all these methods. In the process each publishing house presents a range of behaviours on dimensions such as transparency, profitability, exclusivity, giving back, citation and self-citation, respectability and so on. MDPI is shifting within these dimensions; it has not changed category. Using categories to explain these dynamics will not help us understand the problems of academic publishing.

We can predict that if this pattern is sustained it could affect MDPI’s brand and reputation. This could happen if researchers find too many weak or insignificant papers in the journals they are consulting. Or it could happen if it becomes widely known that a journal is relatively easy to publish in. This would reduce the kudos of having a paper accepted.

However, because acceptance and rejection data are no longer available on the MDPI website, we will not know what is happening to rejection rates. We cannot know, at the level of each journal, how inclusive they are or are becoming. This points to a wider need for all publishing houses to be more transparent with their journals' data, so that researchers can make informed choices about where to publish. MDPI's transparency had been welcome. It is now, unfortunately, following the standards set by the other publishing houses.

Dan Brockington,
Dar es Salaam,
10th November 2022.

Methods

In June 2022 I copied data for submissions and publications for 206 journals available on the MDPI website that began publishing in or before 2015. At that time MDPI made publication data available for the previous four years, and only when four years of data were available. Data I used in my analysis are available here.

Average net APC are based on figures provided by MDPI that have been put in the public domain. They are lower than APC charged because they are net of all waivers exercised. None of the figures presented above account for the effects of inflation.

A PDF of this blog is available here.


A Checklist of Questions for Working with Open Access Journals

This blog concludes a sequence of posts: two previous blogs on MDPI's recent growth, and a survey of people's experience of working with the company. It should also be read in conjunction with Paolo Crosetto's recent exploration of MDPI's growth.

The meteoric growth of MDPI is one of the recent surprises of academic publishing and part of a broader shift in publishing. Elsevier has produced a raft of new 'X' journals with online access. Springer has expanded the lucrative 'Nature' brand with many new titles. Companies like Frontiers are also growing (with investment from Springer). The rise of MDPI has to be understood in the context of increasing opportunities to publish, the desire of publishing companies to capture the lucrative charges researchers are prepared to pay to publish, and the free labour researchers provide when reviewing, writing and revising.

A frequent refrain in the survey I conducted about experiences with MDPI was the complaint that the publication landscape, and the behaviour it encourages and exploits, is the real problem. I don't think it helps to declare this situation to be 'unsustainable'. That merely means that it cannot continue in its present form. But since its present form is the result of change, and since we know it will change, pronouncing it 'unsustainable' does not tell us anything at all.

We can say that it produces strange dysfunctions. It creates ever greater pressures to publish, which makes people miserable. It produces more work than we can ever read (see here). It adds to the mistakes and falsehoods that circulate as ‘science’, which decreases public trust in our work. It decreases scientists’ trust in science too.

Research communities like to feel looked after. We want our funders, governing institutions and publishers to have our best interests at heart. In the case of journals this means that we want journal editors to protect us from having to review bad papers. These should be desk rejected before review. We want journals to shield us from faulty publications, which use up reading time and can entail unwelcome distractions compiling responses. We want to be able to understand colleagues' CVs, such that lists of publications indicate real achievement.

The research communities of which I am part do not feel looked after by academic publishing. The more we learn about it the more we feel exploited. And our own behaviours and incentives make it easy for us to be exploited. The relationship between academics and publishing is sometimes like that between alcoholics and a liquor store. We need each other. But this is not a wholesome relationship.

There are moves to tackle some of these dysfunctions. They will require interventions by collectives of Universities, governing departments, and the institutions which guide and fund research. The sort of structural change required cannot be effected by individuals. But there are questions we can ask ourselves that will help to push things in the right direction. I have listed them below, prompted by the survey of MDPI experience. They apply, however, to all manner of companies.

If you work in library and information services at a research institution

  • Do the research and departmental managers at your institution know what payments (for subscriptions and open access licences) are made to different publishers?
  • Are they using this information to advise their colleagues about where to publish findings?

If you are a senior or mid-career researcher

  • Do your publishing, journal support and guidance to junior colleagues promote the best open access outlets and practices?
  • Do you know how much it costs your institution to access the journals in which you are publishing?

If you are a junior researcher who is thinking about acting on a request from a journal to do something

  • Does this journal support the best standards in open-access publishing?
  • How much does it cost your library to subscribe to this journal?

If you are the editor of a journal

  • Are the targets your journal’s support staff face going to encourage good email behaviour and rigorous review and revision?

If you are thinking about editing a Special Issue

  • Do you know what the word ‘Special’ means?
  • Is it apposite for the journal in question?

If you are willingly a member of a journal’s editorial board

  • Can you recognise the names of more than 25% of the other members?
  • If not, or if that presents you with more than 50 names, then have you undertaken any activities in the last 12 months which mean that you are adding to the intellectual work of the journal?

If you are a critic of an open access publishing company

  • Are you promoting effective open access publications through other sources?
  • What have you done for an Elsevier journal in the last year?

If you are a manager in a publishing company

  • Are the incentives, rewards and sanctions of your staff encouraging healthy interactions between them and research communities?
  • Are levels of pay enough to encourage employees to stay on and build memory of good practices within specific journals?
  • Are you monitoring the right metrics to determine which journals are enhancing the company’s reputation?

If you work at a journal and are about to email someone to ask them to review a paper or edit a Special Issue

  • Does this person know about the topic in question?
  • Have they been emailed recently by any journal in your company?
  • Have they asked not to be contacted by your journal?
  • Do your proposed deadlines fit the working days available in your correspondent’s country?
  • Does your email give the option of opting out of receiving further emails?

If you are researching a company’s trends and data

  • What kind of nerd are you?
  • Does your family think you are a complete loser?
  • Are your friends still talking to you?

If you are a friend of someone researching publishing trends and data

  • Couldn’t you at least have filled out his survey?
  • When are you going to talk to him again?

Dan Brockington,
Glossop,
27th April 2021


MDPI Experience Survey Results

What’s this all about?
Earlier this month I undertook a survey of experiences of MDPI among researchers. 1168 people took part. You can read the full report here, and download the data I collected here. You can read about the context of this survey – namely the rapid growth of MDPI journals – here.

How did you find 1168 people who wanted to talk about a publishing company?
I disseminated the survey on Twitter. The survey took only 3 minutes to complete and I sent it to my network. I also searched for people who had mentioned 'mdpi' in their tweets and replied to those tweets asking them to look at the survey. It made quite a hit: over 350 people retweeted the survey, and the interest was such that MDPI promoted it too.

But if people are talking about MDPI on Twitter then they are either celebrating their papers and Special Issues, or complaining. Doesn’t your method simply produce polarised results?
Well it didn't in this case. There were lots of more ambivalent responses. I guess that when people retweeted it, that brought in networks with less strongly held opinions. But you are right to be cautious. We cannot assume this survey is representative of the research community.

Not representative? So what’s the point of it then?
It identifies patterns and associations in views about MDPI journals, and tells us what features different groupings share. A more systematic sampling method can determine how common these groupings are.

I have a low boredom threshold and you’re reaching it. Give me the answer quickly.
There is no single answer.

Snore
But one of the more interesting findings . . .

SNORE
. . is how important responses to email communications are for people’s thoughts about the journals. Basically, lots of people are receiving a large variety of email from MDPI and many do not like it. You can see from the table below that the ‘communication appreciation index’ is pretty low for people who received three or more types of email – and it was quite hard to find people who received few types of invitation.

Table 1: Average Communication Appreciation Index and the Variety of Invitations

Career Stage   0-2 types of invitation to engage with MDPI   3+ types of invitations to engage with MDPI
Junior         -1.3                                          -3.9
Mid            -0.3                                          -4.6
Senior          1.0                                          -3.1

No. of Respondents
Career Stage   0-2 types   3+ types   Total
Junior         141         359        500
Mid            46          364        410
Senior         28          229        257
Total          215         952        1,167

Scores in the Appreciation Index range from -10 (no desire to receive any communication) to 10 (wants to receive every communication)

Are you telling me that you asked 1168 people if they liked being spammed? Do you know what the word ‘research’ means?
I did not ask them that. But this result is important because even people who liked the company most did not always appreciate receiving so many emails. You can see that in the box of free text comments below.

Box 1: Issues with email raised by MDPI Enthusiasts

Too many emails, it is like spam. It is like a predatory journal in this regard. Some interesting journals and publications, but some politics should change.  

I have worked with one that is among the stronger of their journals . . . But, I get a number of requests from MDPI from journals that are not even in my area of expertise. So some requests for our work do not seem well thought out.  

They are relatively cheap and it’s easy to publish in them . . . But I hate the nagging, the angry emails because on Monday you haven’t replied to the email they sent on Saturday . . . I hate being asked to review irrelevant and often incomprehensible papers.

Way too many unsolicited emails.
Too many invitations for serving as guest editors on special issues and submitting to special issues . . . While l like MDPI’s publishing speed and model, these kinds of practices lowers its reputation in my (and others’) minds.

The publisher is fine – it’s those annoying e-mails that really tarnish their reputation.

Reviewed many times in the past for them, published with too. But became too much of a nuisance . . . Have asked them to take me off reviewer list, which they ignored and continuously hound me. Shame – as good journal’s with strong academic side.

They send me lots of email asking to edit a special issue. Lots!

But this box leaves out all the comments of people who are much more fed up, who aren’t ‘enthusiasts’ – why are you silencing them?
Qualitative research does that I’m afraid. The data we leave out can be as telling as those we include, and we always leave out a lot. I’ve put this group’s comments in because it was surprising for me that people who were enthusiastic about the company said things like this.

What do you mean by ‘enthusiast’ anyway?
That’s the second main finding. I was able to group respondents into five basic categories. There were those who are Hostile to MDPI without ever having engaged with it. Others seem to be Put-Off by their engagements with the journal. Some are Ambivalent about what they will do in the future. They are not sure if they will get involved or not. Then there are those who are Engaged and are definitely going to submit papers. The Enthusiasts are those who want to take on editorial duties.

And how do you know that these categories actually exist?
Well I asked people what they had done and will do with the journals. And the groupings that emerged had clear differences in terms of their communication appreciation indexes and brand index scores. Here’s a table with all the differences laid out and explained.

What are all these ‘indexes’, did people reply to you in numbers?
No, an index is just a way of converting categorical responses into numbers. If someone said, ‘yes I want to be invited to edit a Special Issue’, I might score that as ‘10’. Negative responses I’ll score as ‘-10’. Then I take the average. It’s all explained in the Appendices to the full report.
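
To make that concrete, here is a minimal sketch of the scoring. The exact mapping of answers to scores is set out in the appendices of the full report; the values and category labels below are illustrative only.

# Illustrative sketch of the index construction: map categorical answers to
# scores, then average them. The actual mapping is in the report's appendices.
RESPONSE_SCORES = {
    "yes": 10,     # e.g. "yes, I want to be invited to edit a Special Issue"
    "no": -10,     # negative responses
    "unsure": 0,   # hypothetical middle category
}

def appreciation_index(responses):
    """Convert a respondent group's answers to scores and return the mean."""
    scores = [RESPONSE_SCORES[answer] for answer in responses]
    return sum(scores) / len(scores)

print(appreciation_index(["yes", "no", "no"]))  # -3.3, on the -10 to 10 scale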

And can you explain that without sounding patronising?
Not really, no. I'm just not very good at that sort of thing. My daughters forbid me from talking to their friends, and just about everyone else, as a result.

I like your daughters. I thought good researchers were meant to produce decent pictures. Where are yours?
Well here’s a graph of how different disciplines responded to MDPI’s brand associations with ‘Rigour’ and ‘Importance’. Medics, Engineers and Humanities Scholars tended to Agree that rigour and importance were associated with the MDPI brand. Natural and Social Scientists tend to Disagree.

So what we think about the journals is decided by our discipline?
No. Discipline matters, but what we think about them is decided by our views on their rigour, importance and prestige etc – all things that I asked about in brand associations, and by our appreciation of communications. I conducted an ordinal logistic regression to tease out the exact relationships, which is included in the main report.
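
For anyone who wants to try something similar on the downloadable data, the sketch below shows one way such a model could be fitted in Python with statsmodels. The file and column names are hypothetical, and the specification is only indicative of the kind of model reported, not a copy of it.

import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Indicative sketch of an ordinal logistic regression of engagement category
# on brand-association and communication scores. File and column names are
# hypothetical; the actual specification is in the main report.
df = pd.read_csv("mdpi_survey_responses.csv")

categories = ["Hostile", "Put-Off", "Ambivalent", "Engaged", "Enthusiast"]
df["engagement"] = df["engagement"].astype(
    pd.CategoricalDtype(categories=categories, ordered=True)
)

model = OrderedModel(
    df["engagement"],
    df[["rigour_score", "importance_score", "communication_appreciation"]],
    distr="logit",
)
result = model.fit(method="bfgs", disp=False)
print(result.summary())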

I bet you’re proud of that.
‘Ordinal Logistic Regression’ is hard to spell, let alone understand.

And were you able to do anything useful with this survey, like sort out whether the journals are well-reviewed or not? 40 days submission-publication is short.
That is short – and clearly deliberately so. A number of respondents emphasised how convenient and appealing that was. But of course there is no clear answer to this issue. I was told by some people about their excellent, rigorous – and fast – peer review experiences, and other people said that they were disappointed by it. I’ve included a selection of the comments below.

Box 2: Experiences of Review, with Respondent’s Engagement Category shown.

Poor Review Experience
Review process can be somewhat random, with editors and reviewers without enough expertise. (Enthusiastic)  

I had a good experience with two of them and they are deemed to be reputable in my field. However, the quality of peer review and published papers is questionable for three other journals I was invited to write for or to do a peer-review. (Enthusiastic)  

I published 2 papers, both had good reviews, but the whole experience seemed less rigorous than with other journals. The emphasis of peer review seemed to be on a box checked rather than improving the paper. (Engaged)  

I was pleased by the efficiency and simplicity of publishing with them. I did however find the review process very hasty, with the reviewers providing very little meaningful comment. The editor seemed fine with this and did not raise any issues. (Engaged)  

Reviews do not seem to be done by researchers. I submitted a paper and 3 out of 5 reviewers were focused on the paper format (e.g. reference style). (Ambivalent)  

I have had normal papers accepted. The reviewers were poorly chosen in most cases, and the reviews were poor (short, lack of detail…). (Ambivalent)    
Good Review Experience

Submitted only two articles so far. So not much to say, except got some lengthy four reviews for one article that was rejected after major revisions, as two reviewers were not satisfied with revisions. (Enthusiastic)

One paper was submitted there and received 4 reviews – total 17 pages of comments! It took lot a of time to revise and respond to reviewers comments. One of the reviewers disclosed their identity and was an expert in that field. It was accepted after two round of revisions. (Enthusiastic)  

I have had good experiences with MDPI. Fast and rigorous peer review (four reviewers, excellent and challenging reviews, 10 days after my submission). (Engaged)  

So far in my 2 first author papers I’ve had 4 reviewers each time and they were generally competent. There is always a “reviewer 2” but I’ve been happy with the rigour so far. (Engaged)

I published with them as an early-career researcher because they offered to waive open-access fees which for an ECR is rather convenient. It did go through a fairly thorough peer review (2 rounds, 6 reviewers in total). (Ambivalent)

I had a good experience with my one and only paper published with them. 4 reviewers, provided useful and prompt reviews. (Ambivalent)

So it’s just a case of separate academic tribes then. We find the people we agree with about MDPI journals and hang out with them.
No. You’re missing the point that these groupings – these tribes – are changing and evolving. On the one hand MDPI is growing. More and more people are publishing with them. On the other hand that very success is creating tensions and strains. It’s producing unwelcome email requests. And the increased speed is worrying some respondents. There was a sense in the survey from some that they did not like the way things were heading. After all, peer review is meant to work all the time, not most of the time. I’ve reproduced a selection of those views below.

Box 3: Changing Opinions about MDPI over time

In my perception, MDPI overall has a neutral or even slightly negative reputation. MDPI journals are very heterogenous, with the quality of journals depending a lot on their respective EiCs and Editorial board members. I am the [redacted] of an MDPI journal, try to keep rigorous standards, and regularly have to deal with negative spillovers arising from problems with other MDPI journals. (Enthusiastic)

MDPI could improve its reputation by cutting back on the number of journals and special issues (so that special issues are actually special), avoiding sending irrelevant invitations and other predatory practices. The poor reputation of MDPI as a whole is damaging the more reputable MDPI journals that are trying to publish quality research while also offering authors a good service. (Enthusiastic)  

I have edited two special issues and served on the editorial board of [redacted]. I stepped down in [redacted] because I felt uncomfortable about the very rapid increase in no. of associate editors, no. of papers published, no. of special issues. (Engaged)  

MDPI . . is spamming special issues, some of which do contain very good papers. My own experience as reviewer was good so far, and as author mixed. I wouldn’t consider it predatory yet, but it is moving in that direction . . MDPI is rapidly degrading its reputation and the sympathy they had in my field as OA-publishers at the moment. (Engaged)  

I strongly disagree with the MDPI policy of launching innumerable special issues. I have refused many invitations to edit special issues (I do not even reply to these invitations), these are really a nuisance for me. (Engaged)  

Although my experience in publishing and reviewing for MDPI has been good overall, news and opinion by people in academia that it is a predatory publisher (despite some journals being really good and some still being not and have room to improve) can affect the view of my institution and superiors too, and may not allow us to publish in MDPI journals as it affects the reputation not just of the institution but affects my performance evaluation too and so recently I opt not to publish or review in MDPI journals. (Engaged)  

When I started my PhD and then transitioned to a postdoc, my opinion on MDPI journal was very good. I liked the open access part of it and that papers were published quickly . . . [it] was a just fantastic. Then I was invite to review a paper, then another one, then the list grew. Then I became a guest editor, then a member of the editorial board. This allowed me to see MDPI from the inside and I quickly realized what their real priorities are. Everything is done as quickly as possible, everything has a short due date, and everything is a part of a special issue so that the guest editors do all the work. For free, of course. I do not regret having my papers published in MDPI before, and do not say I will not submit my papers to MDPI again. However, their journals will never be my first choice. Further, every time I read a paper published by MDPI I am extra careful and suspicious of the quality of the presented research. (Ambivalent)

So the growth won’t continue?
There is no way we can conclude that from this survey. There are cracks appearing. But we don’t know how wide and deep these cracks are. We don’t know what patterns like this look like for other publishing houses. And MDPI journals are incredibly varied too.

Remember that people have been worried about MDPI's practices for some time, and growth has continued despite that. Paolo Crosetto has argued that, because of recent changes, growth is now not sustainable (here), while Volker Beckmann has replied (in a comment to that blog) that the growth demonstrates the high quality of service these journals provide, and that it will therefore be sustainable.

The difference now is that, a few years ago, not many people had published with MDPI and not many people had heard of them. But the growth in the number of authors has been so great (see Table 2 below) that this is no longer true. I've discussed this in a previous blog which explores trends in growth of MDPI journals from 2015-2020. The scale and speed of MDPI's success means that reputational issues simply matter more. My hunch is that MDPI will have to be more careful about its emails, and possibly slow down MS processing in some journals.

Table 2: Authors by Region of their Institution in MDPI Journals

Region     2016     2017     2018     2019     2020
Aus-Pac 1,077 1,629 2,747 3,914 5,852
C Asia 197 492 1,128 2,518 5,309
E. Asia 13,432 20,982 39,323 65,128 77,389
Europe 9,744 15,906 29,988 56,346 103,155
L Am, C 918 1,728 3,174 5,441 8,701
MENA 756 1,112 2,111 4,279 8,060
N. Am 5,112 7,606 12,156 17,657 25,975
S. Asia 412 685 1,306 2,487 3,462
SE Asia 623 980 1,754 3,131 5,618
SS Afric 372 608 865 1,386 2,516
Total 32,643 51,728 94,552 162,287 246,037

Source: Country level data provided by MDPI

Are you ever going to form a definitive opinion about anything? Have you just asked 1168 people to fill in a survey so that you can say ‘errrm’?
You’re very clear minded. Please do analyse these data to see for yourself what the patterns are. And please look at the full report to get into the nuances. You know you really want to. And then you can tell me where I need to be more decisive.

There’s a lot we’ve learnt from this survey. The overabundance of email is one thing, and that suggests it will be useful for Editors-in-Chief to monitor the ‘hit rate’ of email requests their journals send. If these start going down, then clearly people are disengaging from their journal. We’ve shown how important discipline can be, and how distinct academic communities may sustain, and be sustained by, different journals. This will help sampling strategies of future research. And we’ve shown that brand reputation can bleed across the journals. Which means that journals’ leadership may need to take an interest in each other’s activities (like review glitches or email hit rates).

And finally, remember that MDPI journals are part of a broader publishing ecosystem. We have to consider our responses to these journals within that wider context. I think this survey prompts a series of questions for different groups of people – MDPI staff, Editors, Authors, Reviewers, Critics. A companion blog presents these questions.

My thanks to all who have taken part in this survey and spread the word about it.

Dan Brockington
Glossop
19th April 2021

The full report on the survey is available here.
The data collected for the survey are available here.


MDPI Journals: 2015-2020

Update Nov 2022:
This blog has now been superseded by new data up to 2021
which are available here

In this blog I report on growth in MDPI journals from 2015-2020. It updates two previous blogs on the same topic (the most recent is here). It provides the necessary background for forthcoming results of a survey of over 1000 researchers about MDPI journals. The data used in this report are freely available here, and see the ‘methods’ at the end of this report.

The summary is that MDPI's growth continues to be extraordinary. Following Paolo Crosetto's recent work, I have revised my arguments about the tensions this growth brings between growing revenues and maintaining standards; these tensions are now clearer than in previous years. The findings also provide useful background to the survey of 1,000 researchers and their experience of MDPI, which is available here.

1. Growth, Growth, Growth
MDPI is interesting because of its sustained expansion rates. All the measures in Table 1 are around ten times bigger than they were in 2015. In 2020 the journals I have included received nearly 350,000 manuscripts, published over 160,000 papers, and earned Article Processing Charge (APC) revenues of nearly 200 million Swiss francs (Table 1; Figure 1).

Table 1: Growth in Submissions, Publications and APC Revenues of leading journals

Year   Sub'ns    Pub'ns    Revenues (CHF)   Journals   Ann. Rev. Increase
2015   39,125    17,379    14,424,570       148
2016   54,032    23,529    21,552,564       155        49%
2017   81,844    36,675    34,694,550       176        61%
2018   146,118   65,163    62,556,480       176        80%
2019   241,991   105,936   121,190,784      176        94%
2020   342,025   162,319   191,536,420      176        58%

Revenues are calculated from average APC recovered per paper (see methods). They do not account for inflation.

Figure 1: Submissions, Publications and APC Revenue 2015-2020

2015, N = 148; 2016, N = 155; 2017, N = 176.

2. Rejection Rates Have Not Changed
The growth, and revenues, have not been achieved by lowering rejection rates. The journals with the lowest rejection rates have continued to account for only a small minority of publications and fees (Table 2 and Table 3).

This fact is important because observers sometimes claim that MDPI is a predatory press. I understand predation to mean a form of vanity publishing, in which researchers can, effectively, pay to get anything published because very little is rejected. I cannot see how this claim can be sustained for MDPI journals as a whole. Around 30% of MDPI journals have rejection rates below 40% (Table 4), and these consistently account for less than 20% of revenues and published papers. Higher rejection rate journals account for the majority of publications and revenues.

Table 2: Papers Published in Different Rejection Rate Categories

Rej Rate (%)   2015     2016     2017     2018     2019      2020      Total
<40            2,533    2,962    5,497    11,101   10,257    28,507    60,857
40-49          1,560    5,320    8,423    11,304   19,350    34,379    80,336
50-59          8,679    8,228    14,063   22,078   46,349    75,961    175,358
>=60           4,607    7,019    8,692    20,680   29,980    23,472    94,450
Total          17,379   23,529   36,675   65,163   105,936   162,319   411,001

Table 3: Estimated APC from Different Rejection Rate Categories (000s)

Rej Rate (%)   2015     2016     2017     2018     2019      2020      Total
<40            2,102    2,713    5,200    10,657   11,734    33,638    66,045
40-49          1,295    4,873    7,968    10,852   22,136    40,567    87,692
50-59          7,204    7,537    13,304   21,195   53,023    89,634    191,896
>=60           3,824    6,429    8,223    19,853   34,297    27,697    100,323
Total          14,425   21,553   34,695   62,556   121,191   191,536   445,955

Table 4: Number of Journals in Different Rejection Rate Categories

Rej Rate   2015   2016   2017   2018   2019   2020   Total
<40        44     56     65     69     55     60     349
40-49      26     31     48     33     37     42     218
50-59      45     26     26     33     47     43     220
>=60       33     41     37     41     37     31     220
Total      148    154    176    176    176    176    1,007

Note: All these tables assume genuine rejection and no 'churning' of articles, i.e. if an article is rejected it is not then modified and re-submitted.

3. Growth Begets Growth
Figure 2 shows publications in 2019 against 2020. The red dotted line indicates static output, the black dotted line a doubling of output. Note the logarithmic scale.

As in 2019, the growth in 2020 was driven by large journals becoming more popular – all the blue dots in the top right quadrant are comfortably above the red line. Greatest growth given a journal’s size is found in the middle range journals which publish between 100 and 1000 papers per year. Declines (blue dots below the red line) occur in the smaller journals (bottom left hand side of the graph). Far more journals are expanding than contracting.

Figure 2: Change in Publications 2019-2020 for each journal

The dotted red line represents no change 19-20. The dotted black line represents publications doubling 19-20.

4. Joining the Mainstream
In 2020 as in 2019 MDPI journals are becoming more mainstream. 152 are now covered by the Web of Science (up from 85 in 2016), 163 by Scopus (up from 60 in 2016) and 80 by SCIE (up from 27 in 2016).

5. A Growing Global Reach
Increasing numbers of researchers globally seek to publish in MDPI journals. All regions display a dramatic increase in authors over a 5 year period, generally quintupling in each case. But the most remarkable growth, in relative and absolute terms, has been in Europe where there has been a tenfold increase in authors to over 100,000 in 2020.

Table 5: Authors by Region of their Institution in MDPI Journals

Region     2016     2017     2018     2019     2020
Australasia Pacific 1,077 1,629 2,747 3,914 5,852
Central Asia 197 492 1,128 2,518 5,309
Eastern Asia 13,432 20,982 39,323 65,128 77,389
Europe 9,744 15,906 29,988 56,346 103,155
Latin Am. & Carib 918 1,728 3,174 5,441 8,701
MENA 756 1,112 2,111 4,279 8,060
North America 5,112 7,606 12,156 17,657 25,975
South Asia 412 685 1,306 2,487 3,462
South East Asia 623 980 1,754 3,131 5,618
Sub-Saharan Africa 372 608 865 1,386 2,516
Grand Total 32,643 51,728 94,552 162,287 246,037

Source: Country level data provided by MDPI

The focus on the Global North is even clearer if we consider the distribution of MDPI editorial board members in its leading journals. This is not a particularly exclusive club as the top journals average over 1000 board members each. Stephen Bell’s work shows that these are mainly from Europe and the USA, with over half coming from Italy, the USA, UK and Spain.

Figure 3: Editorial Board Member Geography

Source: Stephen Bell, https://twitter.com/1smbell/status/1299037227889102848

6. A Hungry Publisher
The trends I have summarised above fuel an unease in research communities that this growth is too quick, and that MDPI's standards are lax. Some observers dismiss MDPI as a predatory publisher, taking fees but not scrutinising work carefully enough. In my previous blog I discussed those claims and complaints at some length. At the same time others insist that their experience has been good, and that problems are specific to some journals and not systematic across the company. They welcome the alternative that MDPI offers to mainstream publishers that are slow, occupied by disciplinary gatekeepers and often plainly profit-seeking themselves.

The available data on the review process are ambivalent. Table 6 shows that reviews per publication have declined slightly since 2016, indicating less review and scrutiny of published papers. However reviews per submission have also declined, indicating more desk rejections, and either lower quality submissions or greater scrutiny of submissions before review (or both).

Table 6: Trends in Review Reports

Year   Sub'ns    Papers    Review Reports   Reviews per Pub'n   Reviews per Sub'n
2016   58,575    23,568    128,805          5.5                 2.2
2017   82,100    35,900    171,700          4.8                 2.1
2018   165,500   67,300    337,300          5.0                 2.0
2019   269,100   106,200   451,900          4.3                 1.7
2020   381,100   165,200   610,300          3.7                 1.6

Source: MDPI Annual Reports

In some respects my position remains unchanged from that which I first published in 2019. I see MDPI as presenting a mixed bag of good quality and haste. Its journals are many and varied. But, on the other hand, this variety is governed by a single company’s policies and practices. The incentives, rewards and targets for different journals’ support staff are likely to be similar. It is therefore important to look not just at the variety, but also the commonalities at work.

This points to two concerning trends that have emerged from recent work by Paolo Crosetto.

1. The growth of Special Issues. As I explained in my first blog, Special Issues are a key vehicle for MDPI's growth. Special Issues draw upon academic networks, making author recruitment and paper reviewing easier and faster. I had not anticipated, however, how extraordinary the growth in Special Issues would be in MDPI journals. Paolo shows that a great deal of the growth in the largest journals is due to the phenomenal increase in their special issues. This is clearly visible in the number of Special Issues released (8.7 per day in the case of Sustainability!) (Figure 4). It is also plain in the number of papers which Special Issues recruit, relative to standard non-special issue papers. Without Special Issues these journals would be producing far fewer papers. Special Issues underpin the growth of the journals which I have described above.

Figure 4: MDPI Special Issue Growth and Processing Time

Source: Paolo Crosetto, https://paolocrosetto.wordpress.com/2021/04/12/is-mdpi-a-predatory-publisher/.

2. The decline in paper processing time. Paolo’s work also makes clear that the time from paper submission to publication is becoming increasingly concentrated, and shorter (Figure 5). This is one of the things that the company is proud of – it highlights faster processing time in its annual reports.

I cannot see how these trends result from a concern for quality. A vast increase in papers makes mistakes more likely, especially as their scrutiny and revision gets faster. Haste may not necessarily lead to mistakes, but it makes them more likely. There are, however, obvious commercial reasons to speed up the review process and bulk up the number of papers. If 10,000 publishable papers are submitted every month then, over a two year period, the company will earn revenues of over 140 million Swiss francs from these papers if it takes 12 months to process them for publication (review, revise, copy edit etc). Only those submitted in year one will begin earning revenues in year two. But if these same papers are published within 40 days of submission then the returns come in much faster. Over a two year period the company would earn over 250 million Swiss francs. Faster processing time produces more lucrative returns. Paolo argues that MDPI's growth is best understood as aggressive rent extraction from journal reputations. MDPI is not a predatory press, as I understand the term. But it is certainly a hungry one.
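
The arithmetic behind those figures can be checked with a few lines of code. This is a back-of-the-envelope sketch that assumes a flat 10,000 publishable papers per month and the 2020 net APC of CHF 1,180; it ignores growth, waiver changes and anything else that complicates real cash flow.

# Back-of-the-envelope check of the cash-flow argument above. Assumes a flat
# 10,000 publishable papers per month and the 2020 net APC of CHF 1,180.
NET_APC_CHF = 1_180
PAPERS_PER_MONTH = 10_000
WINDOW_MONTHS = 24

def revenue_within_window(processing_months):
    """APC revenue earned inside a two-year window for a given processing lag."""
    months_published = max(0, WINDOW_MONTHS - processing_months)
    return months_published * PAPERS_PER_MONTH * NET_APC_CHF

print(f"{revenue_within_window(12):,.0f}")       # CHF 141.6 million with a 12-month lag
print(f"{revenue_within_window(40 / 30):,.0f}")  # roughly CHF 267 million with a 40-day lag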

Figure 5: Changing MS Processing Times

Source: Paolo Crosetto, https://paolocrosetto.wordpress.com/2021/04/12/is-mdpi-a-predatory-publisher/.

Discussion
MDPI presents a conundrum. The success of MDPI journals cannot be explained by lax standards. They have grown because submissions to them have grown. They offer a fast service, and relatively cheap, open access publication. Their reputation, as measured in citation data and listings on indices, continues to grow. There is an increasingly global community of researchers that wants to publish with them. But that growth has not shaken off the unease that accompanies these journals. If anything the anxieties may well have increased because this growth entails practices which can make mistakes easier. The financial reasons for these practices are more obvious than their academic merits. It is not clear how quality is maintained, or standards increased, as publications have accelerated while processing times have decreased.

Or, to put this point differently, it is mistaken to tell one story about MDPI. Some journals have good reputations. This may be because they are not expanding dramatically. Of 148 journals for which data are available, 95 have consistently published fewer than 200 papers a year since 2015 (averaging only 75 per year).

But if it is mistaken to tell only one story about MDPI journals, it would also be brave to tell more than two. The strenuous efforts of some journals' editors and staff to do good rigorous work is one story, but the dominant, and most prominent, tale is of growth. 94 of the 176 journals for which data are available increased the number of papers they published by at least 25% in 2020. These accounted for over 90% of the estimated APC of the 176 journals – and over 170 million Swiss francs. Growth is the story that MDPI journals like to tell about themselves. It is a clear theme in the company's annual reports.

In my previous blogs I have argued that we have to understand why MDPI works so well for those who choose to publish with them. I differed from critics of MDPI because I did not ask 'this is wrong, how can it happen?', but instead 'this is happening, for whom does it work?'. However that latter question makes it hard to spot emerging tensions and contradictions. These are now more prominent and I must amend two views I expressed in my initial blog about MDPI's success.

First, I suggested that weak papers that got through the review process would get buried under the mass of content. The consequences of mistakes for the brand as a whole were low. But that fails to recognise that the existence of a poor paper is only part of the problem. There are also the collective memories among authors, editors and reviewers of the processes that produced it. If production proliferates, so too can accounts of problems. Even if they are, in relative or absolute terms, few, they can become the story that overshadows the achievements of the brand.

Second, I misjudged the importance of prestige in academic journals. I argued two years ago that MDPI does not seek prestige by rationing space. It wants significance and importance to be determined by the use researchers make of its papers. Uptake was the ultimate measure of significance. However perceived prestige still matters for uptake. Given the abundance of reading material confronting researchers, we tend to put up filters that block out some work and promote others. Hence we will simply not read large swathes of material that do not appear to be from reputable sources, and we will go out of our way to find work from good authors, and to track the latest publications from the top journals of our field. This means that even good material appearing in poor quality outlets will be ignored, whereas poor papers from the best journals will be heavily scrutinised, even (as is the case in my field) when dreadful rubbish is repeatedly turned out by the best journals. The prestige game cannot be avoided. The danger to MDPI's brand is that if some journals choose to maximise revenues, then even those which are publishing more slowly will be tarnished. The low prestige of some journals may leak across to others.

There have been claims that MDPI journals are weak and hasty for several years, even when reviewing times were slower and Special Issues fewer. I discussed them in depth in my second blog. These concerns have not impeded the growth of MDPI's journals so far. Indeed MDPI has probably grown because of the rapid, convenient service it offers and its multiple Special Issues, not despite them. Furthermore MDPI's own survey work into authors' satisfaction shows a positive company in excellent health (Figure 6). If these data are reliable then MDPI is already locked into a virtuous circle of growth that will see ever more researchers seeking to publish in its apparently unlimited space, and ever more reviewers keen to earn the APC reductions their labour affords. This is Volker Beckmann's argument in his comment on Paolo Crosetto's blog.

Figure 6: Author and Reviewer Satisfaction

Source: MDPI Annual Report for 2020 page 22. Methods and Sample Size are not reported in that publication.

But if the concerns I have mentioned have not had a visible impact on growth so far, they may do so in the future. Crosetto argues that MDPI's growth may be unsustainable because of the demands it will place on research communities. Review reports, for example, have increased from 91.5k in 2015 to 610.3k in 2020 (Table 6). And behind that growth is hidden a plethora of email traffic. Submitted papers generate multiple review requests. Quick review and revision entails frequent reminders to reviewers and authors. All this creates incentives for a proliferation of emails which will not be welcomed by researchers suspicious of MDPI's brand. If there is a group of people who are choosing not to engage with MDPI (and Figure 6 can tell us nothing about those people), then they are likely to become more exasperated, and more vocal, as MDPI requests increase. Past successes may even create the conditions for future problems.

In order to understand these tensions we need better data on what different communities of researchers think about MDPI journals and their processes. We need to understand the extent to which those views reflect personal experiences, and to what extent they reflect shared stories. We need to understand how these perceptions vary according to discipline, career stage, geography and institution. Some of this information will be available in the short survey of MDPI experience I launched earlier this month. Its findings will be the subject of a companion blog.

Dan Brockington,
Glossop,
 16th April 2021

Methods

I copied data for submissions and publications from the MDPI website for the 176 journals that began business on or before 2015 and which handled more than 10 papers in their first year. MDPI makes publication data available for the previous four years, and only when four years of data are available. The number of journals included is lower in 2015 as data were not, and are not, available for those journals. Data I used in my analysis are available here.

Average net APC are based on figures provided by MDPI that have been put in the public domain. They are lower than APC charged because they are net of all waivers exercised. The figures I have used, in Swiss Francs, are:

2015: 830
2016: 916
2017: 946
2018: 960
2019: 1144
2020: 1180
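
As a quick check on how these figures are used, the sketch below multiplies each year's published papers (from Table 1) by that year's net APC; 2015, for example, returns the CHF 14,424,570 shown in the revenue column. The per-journal estimates work the same way.

# Revenue estimate used throughout: papers published multiplied by the year's
# average net APC. Publication counts are those reported in Table 1 above.
NET_APC_CHF = {2015: 830, 2016: 916, 2017: 946, 2018: 960, 2019: 1144, 2020: 1180}
PUBLICATIONS = {2015: 17_379, 2016: 23_529, 2017: 36_675,
                2018: 65_163, 2019: 105_936, 2020: 162_319}

for year, papers in PUBLICATIONS.items():
    print(f"{year}: CHF {papers * NET_APC_CHF[year]:,}")
# 2015 gives CHF 14,424,570, matching Table 1.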

I requested data on authors’ institutions’ countries from the MDPI and am grateful to the company for sharing them. My thanks to Stephen Bell and Paolo Crosetto for sharing their data and analyses with me and for permission to reproduce the figures above.


African Fiction for Older Children

Publishing has an obdurate diversity problem that it is slow to address. The publishing industry in the US is persistently too white. In the UK, Booker prize winner Bernardine Evaristo has complained that there are still too few Black British novels published. The hashtag #ownvoices, which seeks to promote marginalised voices, is only a recent creation.

This problem is plainly apparent in children's literature. Indeed in the UK the first survey of diversity in children's books was only conducted as late as 2017, revealing that just 1% of books published in that year had a main character who was a person of colour. In the US there are more data, which also demonstrate a sustained lack of diversity that is only beginning to shift.

An important part of the diversity problem is the lack of attention given to African authors, publishers and writing. James Murua's influential blog tackles that head on – and the fact that it had to do so, and gets so much traction for doing so, demonstrates the significance of the gap it addresses. I want in this blog to examine a particular type of children's literature: books, published in English, which feature African main characters in African settings and are aimed at children aged 10-13. I shall call this type of book 'African fiction for older children'. I began exploring this topic after writing my own book for this age group (called The Grymcat Conspiracy) that is centred on a Tanzanian family and storyline.

Books like this are really important. Fiction in which African children's lives, dilemmas, issues, themes and settings are dominant is vital for children's creative and personal development. As Ruby Goka, the prize-winning Ghanaian novelist, observed, 'the more African children see themselves reflected in the pages of books they read, the more they will dream and know they can be anything they want to be.' In preparing my book I sent a copy of the MS to children at a Tanzanian international school and one of them said 'I want to read more because this is an African story'. Pule Lechesa complained that, '[w]e must be honest about it, many – if not most – of celebrated children's literature in the western world leave our African children cold and disinterested' (quoted by Raphael).

We must put this English literature in context. Many children will enjoy stories in their mother tongue, told by families and friends. There are amazing stories published in Swahili and Arabic. Novels written in English (or French and Portuguese), in languages of colonisation, will not be the only answer to children's creative and imaginative needs. But books in English for children still matter. English provides access to some of the most powerful African literature, not to mention multiple career advantages.

The production of African fiction for older children in the UK or US is not great. The two surveys which track diversity in these countries do not even record whether their racially diverse books feature African characters. But it would be a mistake to think that the absence of such literature in these markets means that it is rare. Pule Lechesa went on to observe that 'the great thing is that some African writers have managed to produce very fine work for African kids' (quoted by Raphael). I have set out below a partial list of some of these writers who have written books for children from early readers to young adult.

[Table: a selection of African authors writing for children, from early readers to young adult]

From these authors, and hundreds of others, I have put together a list of nearly 700 works for older children that have been published since 1960 (available here). This will keep even assiduous readers busy for some time.

But it is also the case that there could be much more of this literature. Much of it is now out of print. There are currently around 20-30 titles in this age group published each year. Much of it comes from national presses based in Ghana, Nigeria, Kenya and South Africa. The major international presses produce relatively little – in part because they have a diversity problem.

The leadership demonstrated by African publishing houses with respect to publishing African fiction for older children is welcome. But I suspect that authors, and readers, would welcome more outlets. Authors would welcome opportunities to get their work to larger, international markets. And children beyond the continent need to read these stories too.

There are three reasons why this could be, and should be, a growth industry. First, because this magic age group (also called upper middle grade, or tween) represents the time in children’s lives when they read the most. For example, here are graphs of books read, and time spent reading, in the US and UK respectively.

[Figures: books read and time spent reading by age, US and UK]

We do not have similar data for African nations, but the available comparative data tell us that children in the US and UK in fact read substantially less than many of their counterparts. The peaks of reading reported in the US and UK may be higher in other countries.

[Figure: international comparison of children's reading]

It is quite possible, therefore, that older children in many African countries are, ceteris paribus, already reading more than those in the US and UK. We do not have the data to tell either way. The point remains: the available data suggest that this is the age when children can be reading the most.

Second, the sheer numbers involved mean that publishers neglecting African markets risk being foolish. Currently there are 85.8 million children and young adults aged 5-19 in the combined markets of the US, UK, Canada, Australia and New Zealand. By 2035 the young population in these five wealthy countries will have increased to 86.3 million. But in Anglophone Africa there are currently 200 million children and young adults. By 2035 this will increase to 256 million people. Neglecting a market that size does not really demonstrate the best business sense.

These numbers mean that growth will be much easier here than in the older markets. Some 3000 new children’s/YA books are published each year in the US. This is for a market of 61 million potential readers. There is clearly an opportunity to grow demand for more books if only 20-30 African fiction books for older children are currently published every year. It is unlikely that this market is saturated.

A sceptic may be suspicious of these numbers, if high levels of poverty mean that these large numbers of children do not create effective demand. I would encourage such sceptics to be more sceptical of the data which underpin those beliefs. Data on wealth and economic activity in many African nations can be inaccurate. Ghana and Nigeria famously revised their economic status from low income to middle income overnight following improvements in government statistics. New research also shows that even respected data on poverty systematically ignore important assets in which rural people are investing (such as their children's learning). Given that reading for pleasure obviously makes you cleverer, it is likely that families investing in their children's education (and there are many of these) will also want to buy books.

The idea that there is insufficient effective demand also does not accord with the historical pattern in the production of African fiction for older children. As the graph below shows, this first peaked in the 1980s and early 1990s (when effective demand was lower than it is now). It then declined in the late 1990s and early 2000s.

[Figure: production of African fiction for older children over time]

The patterns here suggest that major publishers have difficulty comprehending the potential in these markets. For instance the most successful series on the continent catering for younger readers is Pacesetters. But its creator, Agbo Areo, an editor with Macmillan Nigeria, had to write the first book in the series to convince his (UK) bosses that there was demand for these works (!) Marie Umeh describes how Flora Nwapa was marginalised by her publisher for being a ‘Third World’ author and had to establish her own publishing house to distribute her pioneering works.

Third, there is the fact that markets for children’s literature can be self-reinforcing. Demand grows demand. This is visible partly in the sheer numbers of books sold. In apparently saturated markets such as the UK (shown in the graph below) demand for children’s books is growing stronger and stronger. The population of book buyers in the UK is not growing, but the book market is.

[Figure: the UK children's book market over time]

It is also visible in the way that literary trends work. Writing about one particular topic does not make it harder for others to do the same, it makes it easier. H.G. Wells invented time travel – and whetted public appetite for more. No one seems to have thought, after Tolkien, that there is no more space for works featuring dragons, wizards and dwarfs. The Worst Witch made it easier, not harder, for J.K. Rowling to invent Hogwarts. So as more African fiction for older children is produced, demand for it will grow accordingly. This is already visible in the flourishing of new genres like Africanfuturism.

The resurgence of publishing for this age group in recent years suggests that this need is beginning to be recognised. There are new global initiatives, and schemes in specific countries, which are finding ways of bringing exciting books to children who can access screens more easily than printed text. Prizes such as the CODE Burt Award for Young Adult Literature (now ceased), the Jomo Kenyatta Prize for Literature, the MacMillan Writer’s Prize for Africa, the World Reader Inspire Us Project and the Children’s Africana Book Awards have raised the profile of these books and their authors.

African authors will be writing the vast majority of African fiction books for older children that will be published. But they will not be writing all of them. Authors from many backgrounds will join in; I hope to be among them. But it is ground we, who are not #ownvoices, will need to approach incredibly carefully. Particularly for authors with European or North American backgrounds, writing about African characters and settings is replete with dangers of appropriation, misrepresentation and white gaze. We have to recognise the biases and predilections we bring to our subject matter.

In my own case, I have found that despite decades of learning about Tanzania, of working there as an anthropologist in remote rural areas, despite living there as a family, despite learning Swahili (and speaking it at home), despite working on research projects with colleagues across the country for years – all that still does not ‘deal’ with my white gaze. It doesn’t work like that. White gaze does not go away; we unlearn it very, very slowly. Writing brings out deeply concealed prejudices because writing is such an intimate task. It lays us bare. We do best to recognise this reality, the better to address it as it appears.

Writing about people who are not like us is essential to fiction. Zadie Smith’s frustrations with ‘cultural appropriation’ (here and here), and Kenan Malik’s writing on the same (here), reflect that basic truth. But these writers’ argument is premised on the fact that any such writing must be really good, and deserve to be published. That takes a huge amount of work and research. Prof Sunny Singh’s questions in this thread, and Sara Collins’s observations in this one, provide vital insights into how important that is. That is part of the respect required in representation.

As publishing companies wake up to the possibilities that abound in African fiction for older children, who knows what amazing stories, series and books will emerge. This field could lead to a flourishing of rich and varied representations, of empathetic characters and settings in which older children across the continent delight to immerse themselves. The more books like this that come into existence, the more demand there will be for them – and the greater the number of new authors who will be forged by the enchantments they read.

Dan Brockington,
Author of The Grymcat Conspiracy
Represented by Van Aggelen African Literary Agency
August 2020


MDPI Journals: 2015 to 2019

THIS BLOG IS OUT OF DATE. For a more recent analysis see this blog

In a previous blog (published December 2019) I explored the performance and changes of the MDPI journals, examining their growth up to the end of 2018. Since I wrote that blog, data for 2019 are now available – and they are more remarkable than before (Table 1). Submissions in 2018 were over 140,000. In 2019 they were just under 240,000. Over 64,000 papers were published in 2018; in 2019 over 100,000. Estimated gross revenues (see note below Table 1) have increased by nearly 60 million Swiss francs. A downloadable PDF of this blog and the source data are available at the end of the document.

[Table 1: MDPI submissions, publications and estimated gross revenues, 2015-2019]

In this blog I reflect on what these trends mean for the arguments of my last blog – specifically, does growth demonstrate signs of vanity publishing? I also reflect on the responses to the first open letter that I wrote to the MDPI. The headline findings are that I believe that the growth has continued at the same rate (if not greater) because the journals provide a service that increasing numbers of academics find useful. At the same time the experience of publishing with, and working for, these journals remains uneven.

Figures are derived from a subset of the more established journals for which data are available (N=154). Not all 218 journals in the portfolio are included because the more recent journals do not have sufficient data to present.

A. Trends in publications 2015-2019.

1. The sustained growth in submissions and publications is shown in Figure 1 below. The figures speak for themselves. It is worth, however, highlighting the dramatic financial returns MDPI is realising (Table 2). I would be surprised if there were many other sectors that could demonstrate these levels of sustained growth.

[Figure 1: growth in submissions and publications, 2015-2019]

[Table 2: estimated gross revenues, 2015-2019]

2. The growth, and revenues, have not been achieved by lowering rejection rates. The journals with the lowest rejection rates have continued to count for only a small minority of publications and fees (Table 3 and Table 4). Indeed the number of journals with the lowest rejection rates has declined, and publications in them also dropped. 69% of published papers and 73% of fees derive from journals with rejection rates of over 50% (Table 5). These rejection rates are still not high compared to the strictest journals managed by other publishing houses (with rates of over 95%).

[Tables 3-5: publications and fees by rejection rate]

3. Growth in 2019 was driven by already popular journals becoming more popular. The largest journals have continued to thrive, but there has also been a surge in submissions to the smaller journals (publishing between 100 and 1000 papers per year). A notable minority of these have seen publications more than double compared to 2018. The journals which show declines tend to be the smallest.

[Figure: changes in publications per journal, 2018 to 2019]

4. MDPI journals are becoming increasingly mainstream: 137 are now covered by the Web of Science (up from 85 in 2016), 134 by Scopus (up from 60 in 2016) and 70 by SCIE (up from 27 in 2016).

In this respect my conclusions in the previous open letter remain unchanged. Despite claims to the contrary, the success of MDPI journals cannot be explained by lax standards. They have grown because submissions to them have grown. They offer a fast service and relatively cheap, open access publication, and their reputation, as measured in citation data and listings on indices, continues to grow.

B. Responses to the first open letter.

There was considerable interest in the first letter I wrote, and the responses tended to fall into three camps. First, a number of observers were surprised. They, like me, were not expecting the results I produced.

The second camp proved more critical. These responses came from authors who had found the experience of managing special issues problematic because too many papers were published in them. They surmised, correctly, that I have never led a special issue in an MDPI journal. I was directed to online discussions about the journals’ flawed processes (here and here) which dismiss them inter alia as ‘a predatory outfit’. Others pointed me to the protest resignation of one journal’s editorial board, and the temporary black-listing of MDPI on Beall’s list.

I have found these engagements really useful. But they have not changed my mind about my conclusions. This is partly because a number of other editors and authors also got in touch with precisely the opposite sentiments. They were the third camp. They valued working with and publishing in MDPI. I was also not persuaded by all the criticisms. MDPI was indeed on Beall’s list, but it has since been removed from it. It is not found on any predatory publishing black list, and is in fact listed on the DOAJ ‘white-lists’ (see here and here). One set of editors did resign, but there are over 200 other journals where the leadership are more content.

When I disagree with the critics it is not because I deny the validity of their experiences. It is rather that I do not think that this is the only story that can be told about MDPI. There have clearly been a series of problematic encounters between researchers and these journals. But I cannot see how these justify the conclusion, even on twitter, that therefore all the journals are flawed. These individual cases of bad practice do not amount to any systematic refutation of the data I have presented above. If these experiences were the general rule then why has there been this extraordinary increase in demand for publication that appears to derive from diverse research communities themselves? I was also concerned that one of the negative write-ups of MDPI journals was tinged with racism.

However, while the critics do not persuade me, some of the criticisms certainly land. Most seriously there have been recent cases of inappropriate allocation policies of APC waivers (see here and here), in which journal managers have deliberately excluded authors from the global South. Full kudos to WASH colleagues for drawing attention to this. Because they did so MDPI central management apologised and reversed the journal’s decision.

In a recent tweet Arun Agrawal, who serves as editor of World Development, listed a series of complaints that render the publishing house problematic. First there is the problem of unwanted emails. In case anyone from MDPI is reading this, here are a couple of examples of unwanted emails which I have protested about and which very few academics ever want to receive. Reminders to join editorial boards that already have hundreds of board members are also problematic.

Then Arun questioned the lack of service to academic societies. MDPI reports that it works with 95 academic and professional societies. It is not clear, however, what financial relationships are involved. So Arun may well have a point, but then the same is true for many publishing houses. Are we happy, for example, with the sums that Elsevier gives back to academia? How can we know at all what those sums are? There is a general lack of transparency across publishing houses as to what they give back to the academy which provides them with so many free papers to publish.

Finally, I found this comment of Arun’s particularly interesting: ‘remarkably high acceptance rates are major problems in @MDPIOpenAccess model. They are freeriders – If we all did same, quality would plummet.’

As I observed in my previous blog the speed of the MDPI review and revision process will make it easier for mistakes to happen. Median time from submission to publication is now 39 days. It is strange that MDPI has not moved to set up a more exclusive journal, such as a ‘best of’ format which promotes the best papers across its journals. It is also strange that it has not sought to limit the size of some of its largest journals. The more papers you publish, the more mistakes are likely.

However I do not believe that high rejection rate journals are policing quality alone. The history of academia is one of proliferating journals. New ones are set up by new epistemic communities because they cannot get past the gatekeepers of existing journals, or because existing journals do not focus enough on the discussions and issues that new schools of thought want to investigate. Journals, in short, do not just police quality. They police fashion and trends. MDPI’s model is to eschew all questions of fashion, scope or significance. It will publish anything that can get past its peer review. It is up to academic communities to determine how interesting and important these publications are.

This means that the journals are more likely to contain work that is rather inconsequential – and unfashionable. They will house fewer prestigious findings. But then it seems to me unreasonable to set up the upper ends of a hierarchy and then resent the lower ends for establishing themselves. Journals like those MDPI houses are the logical consequence of a broader system of hierarchical publishing prestige. They are not free-riders on the system. They are a corollary of competitive, prestige-driven, restlessly energetic researchers who will not believe that rejection means that their work is unpublishable. The explosive growth of these journals was, in some respects, waiting to happen.[1]

As Universities grow in number and size globally, as researchers proliferate, and as we continue to insist that our work is not as boring and irrelevant as everyone else finds it, so it will become harder for existing journals and their epistemic communities to contain our enthusiasms and energies. This, I suspect, explains some of the demand for MDPI journals.

The success of their model does not, for me, invalidate the work they contain. Instead it poses a series of questions about how MDPI has coped with its own success. As the reply by Delia Mihaila (MDPI’s CEO) to my first blog indicates (at the bottom of this page), their growth has required hiring large numbers of new staff simply to process the MSes that the demand has generated. But learning how to do that, and the mores of dealing with papers and reviewers, is hard.

So the questions I would like to understand are:

  • How do the back-offices of MDPI cope with these dramatic increases?
  • How are standards maintained, how do they learn from their mistakes?
  • What best practices do they cultivate and how?
  • Where has the pursuit of volume compromised quality?
  • What internal management mechanisms are producing so many unwelcome emails and such questionable APC waiver decisions?

If we could explore these questions I think it would take us into more interesting territory than questioning the validity of these journals’ existence. MDPI’s growth and size are bound to be making other academic publishing houses, some of which we already know are seriously problematic, take note of its practices. It is possible that MDPI’s success could become a means of leveraging progressive change.

For instance it could raise standards of transparency. MDPI’s practice of accessibly publishing rejection rates and APC charges for each journal is not standard practice in the sector. It should be. As I have urged before, it could make a standard practice of publishing the APC revenues received and waivers given for each journal.

And finally MDPI’s success is worth understanding because it does pose a challenge. One relatively small company has gone from publishing obscurity (26 journals and 2000 papers per year in 2009) to over 200 journals and over 100,000 publications in just a decade. And this has demonstrated considerable revenue-raising potential. What would happen if University libraries and academics were to organise themselves into a similarly powerful operation, but one in which all the revenues were returned to the research communities which produced them?

Dan Brockington,
Glossop, 24th July 2020

Methods

I copied data on submissions and publications from the MDPI website for 154 journals that began business on or before 2016 and which handled more than 10 papers in that year.

I took APC charges from the website but have had to estimate likely charges in some instances where APC for early years were unavailable. In 2018 and 2019 APC changed every 6 months. I have therefore calculated an average APC for the year and applied that average to all papers published in that year. This is inaccurate as the lower fees apply to papers published in the first half of the year, and higher fees in the second half. Nevertheless it provides a satisfactory shorthand in the absence of better data.
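
For anyone who wants to reproduce the revenue estimates, the calculation is a simple one. Here is a minimal sketch in Python; the journal names, numbers and column names are hypothetical placeholders rather than values from the data set.

```python
import pandas as pd

# Hypothetical rows standing in for the real spreadsheet.
papers = pd.DataFrame({
    "journal": ["journal_a", "journal_b"],
    "year": [2019, 2019],
    "published": [1200, 350],      # papers published in that year
    "apc_h1_chf": [1500, 1000],    # APC in force in the first half of the year
    "apc_h2_chf": [1800, 1200],    # APC in force in the second half of the year
})

# Average the two half-year charges and apply that average to every paper
# published in the year - the simplification described above.
papers["apc_mean_chf"] = papers[["apc_h1_chf", "apc_h2_chf"]].mean(axis=1)
papers["est_gross_revenue_chf"] = papers["published"] * papers["apc_mean_chf"]

print(papers[["journal", "year", "est_gross_revenue_chf"]])
```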

The raw data I have used are available here.

A PDF of this blog is available here.

[1] I should add that I have only the highest respect for Arun. The fact that I have repeatedly failed to write anything good enough for World Development in recent years, despite multiple attempts, is entirely coincidental to my argument here 😉


The apparent decline of ‘overseas development / famine relief’ charities in the UK

What do changing patterns in charities’ declared purpose and geography of activity tell us about the health of charities working on international development? In the records of the Charity Commission, charities describe their purposes according to a fixed number of categories. One of these is ‘overseas aid / famine relief’. The graph below shows a decided and dramatic decline in the registration of charities with that label. In the 2000s thousands of new organisations were established to support that cause. But since 2012, enthusiasm for new charities with that purpose has waned. Only 25 (!) new organisations reported that purpose in 2017, and fewer than 10 in 2018. And yet more charities are being established each year now than ever.

The beginning of the end?

[Figure: new charities registering the 'overseas aid / famine relief' purpose, by year]

Source: Charity Commission records downloaded June 2018

How do you respond to a graph like this? We suspect that a number of readers will think that this graph indicates that the development NGO sector is in decline. What had once been a vibrant cause stimulating new charity establishment has virtually disappeared. For these readers this graph captures a wider insularity across the country that is reflected in Brexit and the increasingly hostile environment to development aid.

But that conclusion would be hasty and quite possibly just wrong. These results surprised us, but they do not challenge the main conclusions that are emerging from our research project into the changing trends and fortunes of the development NGO sector in the UK. That work shows the development NGO sector is growing and thriving (both in number and finances), that the public is the mainstay of growth, and that public financial support is growing even as available household income is declining.

Instead this trend more likely indicates that the development sector, and especially its new entrants, are not well characterised by the ‘overseas aid / famine relief’ label. An education charity working in Tanzania will be doing development work – but may choose the label ‘education’ rather than ‘overseas aid’. Our database of nearly 830 development NGOs registered with the Charity Commission includes 320 that do not describe themselves as working on ‘overseas aid/famine relief’. Almost 70% of these were established since the millennium. The labelling has moved on.

So too has the marketing. It is possible that the development sector is thriving because it is re-inventing itself as a whole series of new, more specialist organisations. These new charities do not do ‘overseas aid’ generally, but, for example, combat malaria, or abuses in children’s homes, or promote ‘the girl effect’. One reason behind the success of the sector may be its repositioning of its causes, the appeal of these to new audiences, and the associated investment in fundraising and branding. This in turn fits with a more general shift in development: it is less about aid to poor countries, and more about a series of interlinked global challenges, as the SDGs demonstrate.

Moreover new charities are more ambitious geographically. Charity Commission records show that in the first five years of the millennium the average number of countries that charities worked in overseas was 4.5. Since 2015 it has been over 9.3. The turn away from ‘overseas poverty and famine relief’ does not signal a more general insularity.

But it is important to understand what lies behind these trends. The people who set up new charities are some of the most active members, the core if you will, of the ‘civic core’ that underpins the charitable sector in this country. There was something interesting going on in the mid-2000s in the minds of this group: a clear and determined shift from new charities which had a UK-only focus, to more concern for poorer countries (as the graph below shows). We realise this may be a sore point (for reasons clear here and here), but perhaps this was a consequence of ‘Make Poverty History’? After the financial crash that trend reversed, but we still have a situation in which more charities focussed only on DAC listed countries are being established now than at the start of the millennium. Even if no more charities are set up to pursue ‘overseas aid or famine relief’, we contend that there are still hundreds of charities being established that will sustain the development NGO sector in the future and lead it in new directions. As our work proceeds, we look forward to understanding how trends in new organisations shape the sector as a whole.

The changing configuration of international interest in new NGOs


Source: Charity Commission records downloaded June 2018

This blog first appeared here.


MDPI Journals: 2015-2018

! THIS BLOG IS OUT OF DATE. For a more recent analysis see this blog !

Dear MDPI,

Your journal publications have grown dramatically, and quite extraordinarily. But there are sceptics who suggest that this reflects low standards and distorting financial incentives. I was one of them. To prove my views I explored trends in publications of 140 journals for which data were available from 2015 onwards. But doing so proved me wrong; I could not sustain my sceptical view. Instead I think I have a better understanding of why researchers are so keen to publish with you. But my exploration also makes plain challenges that you will face in the future, that are caused by your remarkable success. In this letter I describe your growth, the lack of substance to sceptics’ criticism and the challenges which your success creates. I also describe potential solutions to them. Here is the word version of it.

1. The Remarkable Growth of MDPI

MDPI’s growth is phenomenal. In barely ten years it has transformed from a small bespoke publisher managing a few journals to a major player publishing over 200 journals and tens of thousands of papers (Figure 1A). Moreover this growth is not due to new titles; it is driven by the popularity of its older journals (Figure 1B). 2018 in particular was a remarkable year for the company in which a major rise in submissions across almost the entire portfolio drove dramatic increases in the number of papers published. That fact is most plainly seen in Figure 1C. This shows that in 2018 the vast majority of the journals saw submissions increase by at least 50%, sometimes much more, resulting in thousands of extra MSes.

This growth reflects a number of unusual features. Processing times are short – in 2018 the median time from submission to publication was only 39 days (!) The journals are entirely open access, with all costs paid for by Article Processing Charges (APC). APC range from 350 to 2000 CHF. The median value charged per journal is 1000 CHF. The more popular journals charge more, and most papers published in 2018 paid 1700-2000 CHF. But these rates still make the journals relatively cheap to publish in (for gold standard open access). Furthermore, as open access journals the finished products are easily accessible and citable. Growth has been facilitated by the large number of special issues each journal promotes. These are curated by guest editors. It is easier to identify speedy reviewers for special issues as they leverage the networks of the guest editors. The journals also have (I am told) an excellent journal management system that is easy to work with. Finally, MDPI are also more transparent than other publishers. Their website is easily navigable and presents useful material about all its journals, such as rejection rates, which others do not make so easily available.

These features suggest that MDPI has become an increasingly attractive venue for researchers to publish because of the quality of service it offers. This service means that increasingly large numbers try to place their work in its journals. It publishes more papers because more and more people are sending in papers for review.

But it is also possible that the growth in papers reflects low standards. In some academic quarters there is scepticism as to the quality of MDPI journals. The speed of review and the abundance of papers make it harder to maintain the highest academic standards. Good science can be slow. Hundreds of special issues, each with their own editors, will make it hard to keep consistent standards across such diversity. Similarly the variety of topics covered within individual journals also makes overseeing the same high standards of review across so many different subjects harder. The more popular journals also have hundreds of editorial board members. Again this raises problems of consistency. Finally, a sceptic might also suggest that the APC for published papers introduce an incentive to publish weaker papers, because any paper, however poor, brings in revenue. I myself held those views before writing this document.

We can examine the sceptic’s case by considering how rejection rates have influenced the growth in publications and by exploring changes in revenue across the portfolio. I present these data below. Methods are described at the end of this document.


Figure 1- The growth of MDPI journals, papers and submissions (click here to see a pdf version)

2. Growth is not associated with lowering publication standards

On average MDPI journals are relatively generous, with acceptance rates across the portfolio of older journals of just under 50%. However their more popular journals tend to have higher rejection rates. Rates have not changed significantly over the previous four years of growth (Figure 2A). These averages conceal a wide range of rejection rates, from below 10% to over 80%. These are shown in Figure 2B, which plots annual rejection rates for each journal against total submissions for that journal in each year.

More important than the headline rejection rates is what happens to these rates as submissions increase. When journals become more popular they have two choices. They can get bigger and publish more papers. Or they can raise their standards and increase their rejection rates. The former increases revenues from APC, the latter can raise reputations and standards. High rejection rates in good journals are generally over 80% with the best journals enjoying rates of over 90 or even 95%.

MDPI journals have generally not sought to become more exclusive as they have become popular. Few journals have rejection rates over 70%. At the same time they have not grown the larger journals by lowering standards and decreasing rejection rates. The most popular journals tend to have rejection rates which have remained stable at between 50 and 70% even as interest in these journals has mushroomed (Figure 2C & D).

There is a tendency for the smaller journals to have lower rejection rates and for those rejection rates to have declined from 2015-2018 (Table 1). However this is a tendency not a rule. Some smaller journals have high rejection rates which have become more stringent with time (second panels of Figure 2C & D). Editors clearly have freedom to respond as they wish to increasing interest in their journals.

[Table 1: rejection rates by journal size, 2015-2018]

There is no evidence therefore to support the sceptic’s view that MDPI journals have grown because they have become easier to publish in. The journals with higher rejection rates publish more articles than those with lower rejection rates (Table 2). MDPI journals have become substantially more popular without lowering rejection rates of their largest journals.

[Table 2: publications by rejection rate]

Figure 2: Change in rejection rates. Area of bubbles proportionate to 2018 submissions (click here for a pdf of this figure)
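
For readers who want to try recreating a plot like Figure 2B, a rough sketch of the approach is below. The data frame is a hypothetical stand-in for the source data, which are linked in the methods section.

```python
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical journal-year records standing in for the real data.
df = pd.DataFrame({
    "journal": ["a", "a", "b", "b"],
    "year": [2017, 2018, 2017, 2018],
    "submissions": [800, 1400, 150, 260],
    "rejection_rate": [55, 57, 32, 35],   # per cent
})

# Scale every bubble for a journal by that journal's 2018 submissions,
# as in the original figure.
size_2018 = df.loc[df["year"] == 2018].set_index("journal")["submissions"]
sizes = df["journal"].map(size_2018) * 0.5   # arbitrary factor for visibility

plt.scatter(df["submissions"], df["rejection_rate"], s=sizes, alpha=0.5)
plt.xscale("log")
plt.xlabel("Submissions in year")
plt.ylabel("Rejection rate (%)")
plt.title("Annual rejection rate against submissions, per journal")
plt.show()
```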

3. Increased popularity brings in more money, but pursuit of APC does not drive publications.

The growth of interest in MDPI journals has presented a considerable commercial opportunity, and the company has made good use of it. The company structures APC such that the more popular journals are more expensive to publish in (Table 3).

[Table 3: APC by journal popularity]

As a result of increasing demand, APC have risen across the portfolio (Table 4) to some consternation in the open access community. When a journal is younger and smaller it is cheaper to publish in. But when it receives more interest, the price rises.

[Table 4: changes in APC across the portfolio]

The value of APC and their importance to the business model of the company raises the prospect that it may be tempting to use journals as a means of raising revenue by accepting dubious papers. Papers may be accepted because of the APC they will generate rather than their scientific merit.

This is not the case. Journals with the lowest rejection rates raise the lowest revenues (Table 5). The value of rejected papers substantially exceeds the revenues of published papers. The commercial benefits of low rejection rates are negligible compared to the returns from journals which are harder to get into. The more exclusive journals are more lucrative for the publishers.

[Table 5: revenues by rejection rate]
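
A toy calculation, on made-up but representative numbers, makes the point (the real figures are in Table 5): for any journal that rejects more than half of what it receives, the APC it declines to collect exceed the APC it earns.

```python
# Illustrative numbers only - not taken from the data set.
submissions = 1000
rejection_rate = 0.60        # a relatively selective journal
apc_chf = 1700

published = submissions * (1 - rejection_rate)
rejected = submissions * rejection_rate

earned = published * apc_chf    # revenue from accepted papers: 680,000 CHF
foregone = rejected * apc_chf   # APC not collected on rejections: 1,020,000 CHF

print(f"earned:   {earned:,.0f} CHF")
print(f"foregone: {foregone:,.0f} CHF")
```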

There is no evidence therefore to support the sceptic’s view that MDPI publishes more because it has a financial incentive to do so. Revenues have grown because charges change as researcher interest grows, not because low standards let in too many papers.

4. Scope, Significance and the Place of MDPI in the Publishing Ecosystem

At first sight it is difficult for a sceptic to understand how such an increase in the quantity of publications cannot be associated with low standards. Where could all these publication-standard articles have come from so quickly? For sceptics the speed of the review process, the diversity of topics covered and the plethora of special issues and editors must mean that quality is not always maintained. In 2018, for example, 14 journals I have examined were accepting over 70% of all submissions, leading to just under 2,400 published papers.

But these low rejection rates are relatively uncommon. We cannot explain the growth of MDPI simply by claiming undue haste in the publication process. Another possible answer is that as the journals’ reputations have grown so they have attracted more interest from better researchers who are keen to get their work into these growing new, open access journals. The guest-edited special collection model that MDPI uses to bring in content can easily be scaled up. This makes it possible to keep reviewing times short, and makes it easier to locate willing reviewers. The journals benefit from the free labour and networks that special issue editors are so keen to provide. In this scenario MDPI is just capturing a greater market share of the publishable research articles. This interpretation has added force because 63 professional scientific associations had partnered with or affiliated to MDPI journals by the end of 2018.

Another indication of quality – although I have not assembled the data here – is that MDPI journal citation rates are rising, and they are increasingly being listed on the large journal and citation databases. More scientists want to talk about the work that MDPI journals are producing. In 2018 117 journals were covered by the Web of Science Core collection, 54 by SCIE and 111 by Scopus.

The sceptic’s view of MDPI is just not supported by the data I have analysed. Indeed the stance is obtuse because it will not help us to understand the nature of the service offered by these journals or their appeal to the researcher community. MDPI journals have grown because submissions to them have grown. Therefore we have to understand why they have become more popular.

It may be helpful to conceptualise the publication choices of academics as a market place – offering various degrees of accessibility, reputation, prestige and ease of publication. MDPI journals have generally not used the increased submissions they have enjoyed to make themselves really hard to get into. After all, some of the better journals pride themselves on rejection rates of over 90%. This is not MDPI territory. They will not become the most prestigious journals in academia while rejection rates are so generally low.

There is a risk that the high acceptance rates and the speed of the process increase the risk of journals making mistakes, and thus bring down the value of publishing in them. But the risk of being associated with mistakes is deterring few people. The rising level of submissions clearly demonstrates that any doubts about quality do not inhibit many researchers from submitting their work to these journals. There is clearly an appetite for journals which are relatively easy to get into, and which are then more accessible for more people to read. That is why it is a mistake to see the growth and popularity of MDPI journals as a sign of poor quality. This view simply fails to recognise the value that other researchers find in the services MDPI offers.

It is better instead to recognise that MDPI may herald a new approach and philosophy to academic publishing. MDPI journals are not published physically; they only appear in electronic form. When journals were printed space was at a premium – it cost more to publish longer papers. The best papers were those which made significant scientific advances, but by using minimal words, diagrams and space. Progress was concise.

However, without space constraints it is possible for editors to argue that if something is true and valid then it deserves to be published. The electronic-only format means that space is never an issue. They never have to worry about less important papers displacing more important papers. Instead there is room for everyone.

Another way of putting this is that the scope and significance of scientific advances individual papers make need not be criteria for publication. I once wrote a paper that reviewed conservation spending by conservation NGOs across all of sub-Saharan Africa over a three year period, and compared its distribution to conservation priorities. This work was completely original, no one had ever tried to do anything like it before. It could have substantially improved our understanding of where conservation effort was directed. Yet the journal I sent it to rejected it within 24 hours because it did not have adequate ‘scope’. Important as I thought my work to be, there were more important things for time-pressed scientists to be reading.

This approach to scope and significance is most transparently clear in the ethos of a new multidisciplinary journal ‘J’ that MDPI launched in 2018. I have reproduced the full text describing the journal and its hopes below (Box 1). The important text is highlighted in bold. It states that reviewers will not be asked to consider the significance of contributions, merely their validity. The larger community of readers will determine how much any particular findings matter. Scientific progress no longer has to be defined by its combination of insight and concision. The most long-winded and minor advances can still be published. Scientific progress can be made if a paper merely adds insight, without any consideration of space, concision or efficiency.

Box 1: J — Multidisciplinary Scientific Journal (emphasis added)

J (ISSN 2571-8800) is an international, peer-reviewed, multidisciplinary, open access journal, which provides an advanced forum for high-quality original research across the entire range of natural, applied, social, and formal sciences. J is dedicated to publishing all types of research outputs, including negative and confirmatory results in all disciplines and to make these results available to the relevant scientific communities shortly after peer-review. Our goal is to improve fast dissemination of new research results and ideas, and to allow research groups to build new studies, innovations and knowledge without delay . . . . Reviewers will be asked to evaluate only the soundness of the research approach, to detect the presence of any major flaws, and to make a recommendation regarding publication. They can choose between acceptance, minor revision, major revision or rejection, but only evaluating the validity of the scientific content. The significance of the contributions will be assessed by the appropriate scientific community at large, not by the two reviewers or the academic editor. We encourage scientists to publish their experimental and theoretical results in as much detail as possible so there is no restriction on the length of the papers.

The arrival of J means that there is now an outlet that can publish any sort of work that derives from any combination of disciplines, at any length, about any subject at all. If it takes off, J could become a rather large and all-inclusive journal. It is possible that its rejection rates could be low. Or put differently, rejection rates are almost irrelevant as a measure of success. They are more likely to measure the clarity of the journal’s instructions to authors. Success would be measured post-hoc, by the authority and use that publications acquire.

5. Remaining Challenges for MDPI journals

The reputation of MDPI journals is bound to grow. The mud that sceptics throw will not stick. After all, I have tried harder than most (just try recreating the figures and tables I made above) and, as a result, have failed all the more spectacularly. Occasional mistakes notwithstanding, these journals will be publishing too much interesting science for their content to be dismissed. Anyway there are ample cases of much more stringent journals publishing mistakes, and/or rejecting content that later formed the basis of Nobel prizes. MDPI is no different from the rest of the field.

The same logic applies commercially. With current rejection rates as they are, the MDPI model is relatively well insulated from mistakes and damage to the brand. It is still possible for mistakes to be made. But journals that are meant to be the best still drop absolute clangers, or miss amazing work. For MDPI a poor paper, published inappropriately, makes no difference to income. No one will unsubscribe from a free journal. The revenue from it is already earned and the deficiencies of any individual article are diluted by the sheer volume of other material. Bad work is buried in the mountain of other publications.

A dyed-in-the-wool sceptic may suggest that lowered standards, or the perceived risk of lowered standards, may provoke an institutional response from universities. If promotion or appointment committees are unimpressed by MDPI publications in a CV then this will dampen some of the enthusiasm. But it is difficult to see how this could make any dent in the vast number of publications which MDPI journals receive. Many academics are enthusiastic publishers. They will try for both more exclusive publications and for papers which are more accessible and can reach larger audiences. The trend is for journals to proliferate. Very high rejection rates mean, by their very nature, that most academics do not succeed in reaching those journals most of the time. They have to publish somewhere, and MDPI journals provide a good outlet to do so.

There are, however, two central challenges that will become increasingly apparent, especially now that MDPI journals have become more popular and more mainstream. These are:

  1. Creating ever more content, in which scope and significance are not counted, and paper length is not controlled, creates unsupportable reading demands on researchers. MDPI may be able to publish more and more work. But we will not be able to read it.
  2. The increased financial returns create increasing expectations to give back to the researchers producing these papers.

Each challenge presents obvious solutions. The response to the first is to create more exclusive journals. MDPI’s reputation is that of a generalist publisher with relatively low rejection rates and a relatively easy place to get material out quickly. It could create more exclusive outlets which are cheaper to publish in (and hence more popular with respect to submissions) but where rejection rates are much higher. This would mirror the trend for more exclusive brands (Nature) to establish less exclusive versions of an exclusive brand.

It is easy to imagine a system whereby authors, or editors, who thought submissions had the scope and significance to merit broader readership could pay a small supplemental fee (50 CHF) for their MS to be considered in the more exclusive journals. Papers which were accepted would have all APC waived. This arrangement would be likely to result in a much more exclusive journal, full of more significant papers and with likely high rejection rates.

So, MDPI, why not set up a ‘flagship’ journal (you could even call it that) in which the best papers of your stable are co-published? This would be a more prestigious journal than your others, and particularly appealing because it would be free to publish in.

Another means of helping researchers to cope with excessive content is to produce bespoke review papers. The gold standard is provided by the Cochrane reviews, which consider both published and unpublished work, recognising that important negative results may not be publishable by normal standards. MDPI, however, changes the standards: it publishes negative and confirmatory results. But this makes it all the more important to produce reviews and digests which summarise ever more voluminous material.

The response to the second is to ‘give back’ more. The headline estimated figures of MDPI’s growth in recent years are rather eye-watering (Figure 1). Few commercial operations are able to demonstrate those levels of growth. They are a testimony to remarkable success.

These figures are also only estimates. They do not include 51 more recently created (and smaller) journals. But they also assume that all APC are paid in full. This is not the case. APC are often waived or reduced. A former CEO of MDPI recently stated that 20% of APC are not paid (see the comments on this thread). Good articles, editors, reviewers and members of affiliated professional societies all receive APC rebates. Moreover these are only estimated gross, not net, revenues. MDPI has also had to establish extensive physical and commercial infrastructure in order to service its journals. Its website and journal management system are complemented by three offices (in Switzerland, China and Spain) and, as of 2018, 1248 employees. Nevertheless the success of the model, and the ability of these journals to scale up publications without adding overheads and costs, means that this is a system which is geared towards profitability.


Figure 3: Estimated Gross Revenues – APC of accepted papers 2015-2018 from 140 journals.
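
To make the gap between gross and collected revenue concrete, here is a back-of-envelope sketch. Every number in it is illustrative rather than drawn from the data; the 20% unpaid share is simply the figure attributed to the former CEO above.

```python
# Adjust a hypothetical gross APC estimate for the share of APC that are
# waived, rebated or otherwise unpaid (roughly 20%, per the comment cited above).
est_gross_chf = 50_000_000     # hypothetical gross estimate for one year
share_unpaid = 0.20

est_collected_chf = est_gross_chf * (1 - share_unpaid)
print(f"Estimated APC actually collected: {est_collected_chf:,.0f} CHF")

# Staff, offices and systems would still have to come off this figure before
# anything like a net margin could be estimated.
```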

Publishers generally suffer from popular perceptions that they are exploitative and free-ride on academics’ labour. They do free-ride, but they can counteract this by supporting more academic activities. Many of the activities publishers support (workshops, conferences, awards, sponsorship to attend conferences etc) then tend to generate more content for their journals.

MDPI is active in this space. To give just one example, the APC rebates it offers to reviewers could, theoretically, have reached 28 million CHF from just the 140 journals I have examined. But MDPI is not as transparent about its giving as it could be. Given that its transparency on other aspects of its activity already raises the bar for other publishers, it could be similarly forthcoming with respect to its sponsorship of academic activity. That could drive welcome change across the sector.

Another possibility would be capping or reducing the APC charged for the most popular journals. A similar effect could be achieved by rewarding authors with rebates on the APC on future submissions. The size of these rebates could be linked to the journal’s performance in the year of publication – and thus authors share in the success that their writing helps to create.

Again, MDPI, there are concrete suggestions here to which I would like you to respond. Please could you publish what APC revenues you earned from each journal in each year – and what APC subsidy you have given out to authors and reviewers. Please make clear what other forms of support you are giving. I think if you lead on this then other publishers will have to follow. And, while you are at it, please could you make your data downloadable – it took me a while to copy them all out. Finally, please offer authors an APC rebate on future submissions that is linked to the success of the journal in which they have published – share the love!

6. Methods and Acknowledgements

I copied data on submissions and rejection rates from the MDPI website for 140 journals that began business on or before 2015 and which handled more than 10 papers in that year. I assumed that rejection rates in a given year applied to all papers submitted in that year. Rounding of some of the rejection rates means that minor discrepancies appear in the accepted papers in this spreadsheet and the published data.
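
In code, that assumption amounts to something like the sketch below (with illustrative numbers); the rounding of the published rejection rates is what produces the minor discrepancies.

```python
# Estimate accepted papers from a year's submissions and the rejection rate
# reported (rounded) on the journal webpage. Illustrative numbers only.
submissions = 2375
rejection_rate_pct = 57      # as displayed, rounded to the nearest percent

accepted_est = round(submissions * (1 - rejection_rate_pct / 100))
print(accepted_est)   # 1021; the true figure may differ slightly because the
                      # unrounded rejection rate is not published
```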

I took APC charges from the website but have had to estimate likely charges in some instances where APC for early years were unavailable. In 2018 and 2019 APC changed every 6 months. I have therefore calculated an average APC for the year and applied that average to all papers published in that year. This is inaccurate as the lower fees apply to papers published in the first half of the year, and higher fees in the second half. Nevertheless it provides a satisfactory shorthand in the absence of better data.

The raw data I have used are available here.

My thanks to Clive Phillips for constructive engagement, key insights and critical commentary throughout.

Yours faithfully,

Dan Brockington,
Glossop, UK,
December 2019

 


Exploring the UK’s NGO sector

By Chris Jordan, Communications and Impact Manager. This blog first appeared at the GDI site here: http://blog.gdi.manchester.ac.uk/exploring-the-uks-ngo-sector/

[Visit the NGO explorer site here]

After setting out to better understand the UK’s development NGO sector, Dan Brockington and Nicola Banks soon realised that pulling together the data on thousands of charities would be a huge challenge.

After heroic amounts of filtering and crunching data, their research produced some fascinating insights into the health of the sector as a whole. While working closely with a range of charities, the researchers soon realised there was a real desire (particularly on the part of smaller NGOs and those based outside of London) to access similar information and connect with their counterparts.

While the funding environment strongly encourages (and often necessitates) collaboration across charities, there is very little support in place to facilitate that.

To help plug this gap, Dan and Nicola began working on a simple, accessible website that would allow people to search and interrogate publicly available data on the vast number of UK charities working overseas. They linked up with open data enthusiasts who have already been active in this area, including Dan Kwiatkowski and David Kane, as well as like-minded colleagues in the sector in Bond (particularly Sarah Johns), the FSI and the Small International Development Charities Network Facebook group.

A combination of their enthusiasm, experience and expertise meant that after only a few months of development, the NGO Explorer was unveiled at Bond and quickly put through its paces by a range of small charities.

Use the NGO Explorer to search for a country such as Malawi and you’ll find the details of 802 UK-based NGOs working in the country. You can quickly see what sectors they work in, where else they work and who they work with. The dashboard links to IATI data on projects in the country, as well as a host of other online resources. A list of all the charities and their contact details can be seen, and all the data can be downloaded directly from the site. And, in what is perhaps the best feature, the site permits searches of key words used in NGOs’ own descriptions of themselves, which provides a much more informed way of searching for specific activities and interests.

The same search can also provide fascinating insights into patterns within the UK. A search for NGOs in Sheffield brings up 100 different organisations, which work in 146 different countries, spending a total of £175 million a year. You can drill down into the size of these organisations by income and expenditure, with links off to a host of more detailed information. All searches can be refined by size, activities and locations for a clearer view.

As Dan Brockington points out “providing onward links to other similar sites and networking devices is essential because a site like this can only really work if it fits into a broader ecosystem of networking activities and organisations. After all, it is only a website. To be a success it will have to be used as a spring-board for contacts, interactions and exchange.”

So far, the feedback suggests it has been playing this role. The response from contacts in DfID was “WOW!!!!” – and yes, that was four exclamation marks. Pauline Broomhead, CEO at the Foundation for Social Improvement remarked, “Whatever thoughts or ideas you’re having, you can go off and find different things. It’s a tool you can play with, which really assists with networking. In a sector that’s increasingly stretched, greater collaboration has to be the answer.”

Vic Hancock Fell, director of Raising Futures Kenya echoed these thoughts: “The future of the sector is through collaboration: we can’t exist in isolation. I’ll be using the NGO explorer to find out the other charities where I’m based in Sheffield who also work in Kenya and using it to connect with people who are doing similar work.”

The site is based on Charity Commission data for England and Wales, made accessible and kept updated via Charity Base (created by Dan Kwiatkowski). Finding ways to include Scottish and Irish NGOs in the tool is the next priority.

We’re fascinated to hear if people find the site useful and how it could be improved – so please let us know!

The NGO Explorer project was funded by the ESRC and the Global Challenges Research Fund and supported by the Universities of Sheffield and Manchester.

 

Note: This article gives the views of the author/academic featured and does not represent the views of the Global Development Institute as a whole.

Half-Earth is Half-Hearted. Make way for Thanos and The Half-Universe

Megademophobia – fear of overpopulation – has a lot to answer for. Malthus is serious enough, Ehrlich almost as bad. But now there is a new problem: Marvel’s Infinity Wars. This film spectacularly unites multiple comic book heroes in an orgy of destruction which culminates in the annihilation of precisely half of the people in the Universe (including many of the weirdly abled protagonists). And all this because the Bad-guy-in-chief, Thanos, fears that overpopulation causes too much suffering.

But is this the first time this idea has been circulated? A few years ago, EO Wilson, one of the most prominent biologists of the 20th century, and an evangelist for conservation, came up with the idea of Half-Earth: the notion that half of the Earth was to be set aside for nature. This is based on the principle that humans are not part of nature, and that for the Earth to be able to preserve its biodiversity and sustain itself, humans have to be removed from one entire half of it.

Despite the similarity of these ideas, we don’t think that the Marvel script writers are half-earthers, or that Wilson is a secret Avengers fan (unless of course of Antman). The filmmakers and conservationists are no doubt independent, original thinkers. No one has copied anyone else.

But our views are not so important here. We understand that an irate Half-Earther has been pressing the Half-Earth movement to sue Marvel comics for plagiarism. This is, of course, absurd. First, there is no plagiarism. Second, never pick a fight with Thanos. So we would like the Half-Earth movement to distance itself from such foolishness, and release the following statement:

The Half-Earth movement would like publicly to disassociate itself from any resemblance or comparison with the Marvel film Infinity Wars. Thanos’ evil plot to destroy half of all human life, and our own cunning plan, have nothing in common.  They share no affinity. Any sane observer could tell that one of these plans is a complete fantasy. It is socially (not to say politically) illiterate. It disregards the lived fabric of our lives and the role of people in creating life around us. It is only possible to conceive with new advances in computer modeling. No prizes for guessing which one that is.

Just to underline the point further we would like to point out a few of the barmier elements of Thanos’ plan compared to our own. Thanos vapourised a random half of the universe. Our plan will deliberately target the places where poorer people live. It is cheaper to move them. It is politically safer to leave the rich alone. Thanos only appeared to have human interests at heart when he abolished half of life. Our plan is not about people. It will make life better for the richer half of humanity, but, more importantly, it will also make sure that these people can continue to enjoy the best ecosystem services and the biodiversity that their wealth deserves. Also, Thanos was utterly uncompromising. We are vague and ambiguous when it comes to what sort of life will be possible in the half where people are not around.

So there is no similarity whatsoever between Thanos’ evil plan for domination and our own hopes to clear the land of people who get in the way of our understanding of nature.

To avoid all possibility of doubt we would also like to point out that none of us cheered at the end of Infinity Wars. We have not named any of our offspring or pets ‘Thanos’ in appreciation of his achievements. Nor do we gain any succour from news that an intergalactic message has been intercepted from one Zaphod Beeblebrox to Thanos indicating that, due to a clerical error, the wrong half was destroyed, and asking whether he would accept a further 25% reduction to life in the Universe, at discounted rates.

We hope that this press release will erase all possibility of confusion and allow the rest of us to get on with the business that matters – halving all known cases of megademophobia.

For a sensible rendition of these arguments see:

 

This blog was written by Kartel Shockington, a failed comic book creation with special powers of rapid hair loss. He sometimes appears as Kartik Shanker, and at other times as Dan Brockington. It first appeared here.

Kartik Shanker is at Indian Institute of Science & Dakshin Foundation, Bangalore, India. Email: kshanker@gmail.com

Daniel Brockington is at the University of Sheffield, UK. Email: d.brockington@sheffield.ac.uk.

Illustrator: Amit Kaikini
