ABSTRACT
In the English sense of the word, “efficiency” is the state or quality of being able to accomplish something with the least waste of time and effort. For journals, “efficiency” means providing submitting authors with a peer-review decision with the least loss of time and academic value. On the journal’s part, it also means the least delay in the academic returns that submitting authors deserve from their own work, and the least delay in giving patients and the public access to the possible benefits of the authors’ work. In other words, efficiency is a measurable ability of journals, whether paid or unpaid, to do their “duties well”, “efficiently”, “successfully”, and “without waste and avoidable loss” to the submitting authors.
It is our vision to make the entire publication process coherent and convenient, and at the same time to guard the right of submitting authors to a time-bound, convenient, and efficient service with high customer-service values from their service providers, i.e. the journals, whether paid or unpaid. For this, we introduce the “Bhalla-Cleenewerck Journal Efficiency Factor (BC-JEF©)”, JEF© for short, as a parameter for assessing the functional efficiency of journals.
We introduce JEF© as an inventive non-profit measure to ensure the “greater good” of all concerned. For journals, JEF© would help them recognize their duties and obligations to provide an efficient publication service to authors, facilitate making their publication process more fulfilling and coherent for the authors on whom they thrive, and support healthy commercial competition. For authors, JEF© would help them make an informed choice when submitting their work to a journal. For other agencies, JEF© provides an alternative metric that tracks parameters not covered by any existing journal metric.
INTRODUCTION
In the English sense of the word, “efficiency” is the state or quality of being able to accomplish something with the least waste of time and effort. For journals, “efficiency” means providing submitting authors with a peer-review decision with the least loss of time and academic value. On the journal’s part, it also means the least delay in the academic returns that submitting authors deserve from their own work, and the least delay in giving patients and the public access to the possible benefits of the authors’ work.
In other words, efficiency is a measurable ability of journals, whether paid or unpaid, to do their “duties well”, “efficiently”, “successfully”, and “without waste and avoidable loss” to the submitting authors. The efficiency of journals is also critical because an efficient journal is more likely to ensure the timely disbursal of new scientific information, and of preventive and therapeutic solutions, for use by health agencies, patients, and the public at large. Here, efficiency should not be confused with effectiveness. Similarly, the supposed rigour of a peer-review process, which often is not there,1 should also not be confused with efficiency, which is “doing things right in a time-bound, convenient manner with high customer-service values and without personal biases”.
It is our vision to make the entire publication process coherent and convenient, and at the same time to guard the right of submitting authors to a time-bound, convenient, and efficient service with high customer-service values from their service providers, i.e. the journals, whether paid or unpaid. For this, we introduce the “Bhalla-Cleenewerck Journal Efficiency Factor (BC-JEF©)”, JEF© for short, as a parameter for assessing the functional efficiency of journals.
WHY DO WE NEED TO SPEAK ABOUT EFFICIENCY?
At present, there are no reliable tests of the quality and operational efficiency of journals, which is unfortunate since the “state of medical journals is terrible” and quality remains a major issue.1-2 There are perhaps 16,000-40,000 medical journals in the world, both older and newer,2 and both kinds often struggle to provide their authors adequate quality and efficiency during the peer-review process. The cost of this journal inefficiency falls on the authors, in the form of mental, psychosomatic, and academic suffering. We even suspect that journal inefficiency is one of the strong risk factors for poor mental health among submitting authors.
There are numerous field examples. A manuscript submitted to a journal in North Africa took one and a half years to receive a negative review, and about eight months for an already reviewed manuscript. A manuscript submitted to journals in South America and in East and North Europe took nearly seven months to receive a review. The issues with journal efficiency and quality have been so concerning that manuscripts are rejected even after initial acceptance, favorable review comments, and repeated rounds of revision. Some journals in India have been seen to change their entire submission system during an ongoing peer-review process, without prior notice. A particular mental health journal from India prioritizes personal opinions over professional coherence. In another example, a manuscript on epilepsy waited eight months without receiving a single peer review. There are numerous other examples pertaining to the quality and inefficiency of journals. So, where may we draw the line for journals in their duties and obligations towards their authors, on whom they thrive?
A few common reasons behind journal inefficiency include:
- Inadequate prior arrangement of peer-reviewers who are committed, professional, and available.
- Addition of individuals as peer-reviewers merely to fill a space.
- Lack of formal training and certification of peer-reviewers.
- Lack of any penalty system for the journals.
- A shortage of individuals who are fair, active, careful, helpful, friendly, and thoughtful in their reviewer’s role.
- No formal definition of a peer-reviewer.
- No definition of the purpose of peer review, i.e., to provide constructive comments free of personal bias rather than to act as a “judge of the authors”.
Many of these issues are now slowly being solved innovatively by us. For instance, we are launching a first-ever teaching module for peer-reviewers© to raise a new class of “trained and certified” peer-reviewers with certain pre-defined personal and professional quality parameters.
The extent of unprofessional behaviour can be learned from two simple examples: “I sense I’m not getting paid enough for the time I spend for each patient, and I’m not happy with my income. That’s why I don’t spend much time.” (Physician No. 6).
“The truth is that educating patients takes time, and considering the low visit fees, doctors don’t do it. I mean a practitioner can’t spend 10-15 minutes talking to a patient when the visit fee is so low. That’s why they don’t.” (Physician No. 2).
So, in light of such an environment of unprofessional behaviour, can we reliably assume that individuals would spend any time, mind, or effort on a peer-review process, especially an activity with no monetary benefit? And can we not imagine that inefficient journals and individuals (peer-reviewers) would be a loss to the timely progress and disbursal of vital scientific work being done by authors the world over?
HOW DOES JEF© DIFFER FROM THE IMPACT FACTOR AND CITESCORE?
Both the Impact Factor and CiteScore are commercial metrics run by profit-making agencies. Moreover, their calculation methods are not open and transparent. Both parameters are possibly an attempt to lure authors in a dishonest way. In contrast, JEF© is the only journal metric that is open and transparent, uses a structured questionnaire, and operates under the purview of the UN treaty authority, which makes JEF© a very reliable, unique, and trustworthy metric.
Table 1: The parameters for estimating BC-JEF©

| CRITERIA | PARAMETER | JEF© |
| --- | --- | --- |
| Level of inconvenience for submission | Email submission | 3 |
| | Submission with minimal data entry | 2 |
| | Submission with considerable data entry | 1 |
| | Submission with enormous data entry and/or with requirement of allied documents and/or broad copy-editing or formatting | 0 |
| Time to acknowledge the submission | Same day | 3 |
| | First two days | 2 |
| | First week | 1 |
| | Never | 0 |
| Editorial in-house review | First one week | 3 |
| | First two weeks | 2 |
| | First three weeks | 1 |
| | First four weeks or never | 0 |
| Time to selection of peer-reviewers | Subsequent one week | 3 |
| | Subsequent two weeks | 2 |
| | Subsequent three weeks | 1 |
| | Subsequent four weeks or later | 0 |
| Time to primary peer-review | Subsequent one week | 3 |
| | Subsequent two weeks | 2 |
| | Subsequent three weeks | 1 |
| | Subsequent four weeks or later | 0 |
| Time to decision after peer-review | Subsequent one week | 3 |
| | Subsequent two weeks | 2 |
| | Subsequent three weeks | 1 |
| | Subsequent four weeks or later | 0 |
| Time to publish after acceptance | Subsequent one week, or AOP | 3 |
| | Subsequent two weeks | 2 |
| | Subsequent three weeks | 1 |
| | Subsequent four weeks or later | 0 |

Footnotes: Allied documents include those that are non-essential at the peer-review stage, such as publishing contracts, hand-signatures of all authors, and Ethics Board letters. Durations are in calendar days. AOP is the Ahead-of-Print or online publication of the title, abstract, or full manuscript.
Besides, both the Impact Factor and CiteScore have other issues. For instance, CiteScore counts only the document types indexed by Scopus, and not all journals are indexed in Scopus.
Moreover, there are journals (e.g., the Pan American Health Organization’s Revista Panamericana de Salud Pública) that have served as a “flagship scientific and technical publication for disseminating information of international significance to strengthen national and local health systems within the continent” since 1922. However, these qualities are not reflected in their Impact Factor: the Impact Factor of this journal is merely 0.53. Besides, there is no standard agreement or evidence-based empirical data defining what a “good” or “gold-standard” Impact Factor or CiteScore would be. Both the Impact Factor and CiteScore are based on citation counts, and others have shown that there will always be a small group of core journals that receives the substantial share of citations in a subject, leaving aside most other journals.3
JEF© can be quantitatively determined (Table 1). The estimation can be organized at two separate levels: first, for the first submitted version, and second, for the revised submitted version. For now, we address the first scenario alone, since the first peer review of a manuscript is the most troubling in terms of efficiency. Scores are on a Likert scale from zero to three, with zero as the minimum efficiency and three as the maximum efficiency. Therefore, the JEF© of a journal may range between zero and 21.
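The calculation above can be sketched in a few lines of code. This is only an illustrative sketch of the scoring described in Table 1: the criterion labels and the `jef_score` helper are our own naming, while the 0-3 Likert scoring per criterion and the 0-21 total come from the text.

```python
# Illustrative sketch of the JEF(c) calculation from Table 1.
# The seven criterion labels below paraphrase the table's CRITERIA column;
# each criterion is scored 0 (least efficient) to 3 (most efficient).
CRITERIA = [
    "inconvenience of submission",
    "time to acknowledge the submission",
    "editorial in-house review",
    "time to selection of peer-reviewers",
    "time to primary peer-review",
    "time to decision after peer-review",
    "time to publish after acceptance",
]

def jef_score(scores: dict) -> int:
    """Sum the seven per-criterion Likert scores (each 0-3) into a 0-21 JEF."""
    for name in CRITERIA:
        value = scores[name]
        if not 0 <= value <= 3:
            raise ValueError(f"{name!r} must be scored 0-3, got {value}")
    return sum(scores[name] for name in CRITERIA)

# Hypothetical journal: acknowledges the same day (3), publishes quickly
# after acceptance (3), but is slow in peer review (0-2 elsewhere).
example = {
    "inconvenience of submission": 2,
    "time to acknowledge the submission": 3,
    "editorial in-house review": 1,
    "time to selection of peer-reviewers": 1,
    "time to primary peer-review": 0,
    "time to decision after peer-review": 2,
    "time to publish after acceptance": 3,
}
print(jef_score(example))  # 12
```

A maximally efficient journal scores 3 on all seven criteria (JEF© = 21); a journal that scores 0 everywhere has a JEF© of 0.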
CONCLUSION
To conclude, we introduce JEF© as an inventive non-profit measure to ensure the “greater good” of all concerned. For journals, JEF© would help them recognize their duties and obligations to provide an efficient publication service to authors, facilitate making their publication process more fulfilling and coherent for the authors on whom they thrive, and support healthy commercial competition. For authors, JEF© would help them make an informed choice when submitting their work to a journal. For other agencies, JEF© provides an alternative metric that tracks parameters not covered by any existing journal metric.
SUPPLEMENT
Conflicts of Interest: DB receives research funds and attends and gives talks at meetings that may have been funded directly or indirectly by commercial entities. All other authors have no conflicts of interest.
Acknowledgment/Funding: Not applicable.
REFERENCES
1. Rennie D. The present state of medical journals. The Lancet. 1998;352:18-22.
2. Dickersin K, Scherer R, Lefebvre C. Identifying relevant studies for systematic reviews. In: Chalmers I, Altman D, editors. Systematic reviews. London: BMJ Publishing Company; 1997. p. 17-36.
3. Garfield E. Bradford’s law and related statistical patterns. Essays of an Information Scientist. 1980;4:476-83.