
Advancing Self-Evaluative and Self-Regulatory Mechanisms of Scholarly Journals: Editors’ Perspectives on What Needs to Be Improved in the Editorial Process

Department of Law, Faculty of Management, University of Primorska, 6000 Koper, Slovenia
Submission received: 31 December 2021 / Revised: 1 March 2022 / Accepted: 16 March 2022 / Published: 17 March 2022

Abstract

Meticulous self-evaluative practices in the offices of academic periodicals can be helpful in reducing widespread uncertainty about the quality of scholarly journals. This paper summarizes the results of the second part of a qualitative worldwide study among 258 senior editors of scholarly journals across disciplines. By means of a qualitative questionnaire, the survey investigated respondents’ perceptions of needed changes in their own editorial workflow that could, according to their beliefs, positively affect the quality of their journals. The results show that the most relevant past improvements indicated by respondents were achieved by: (a) raising the required quality criteria for manuscripts, by defining standards for desk rejection and/or shaping the desired qualities of the published material, and (b) guaranteeing a rigorous peer review process. Respondents believed that, currently, three areas have the most pressing need for amendment: ensuring higher overall quality of published articles (26% of respondents qualified this need as very high or high), increasing the overall quality of peer-review reports (23%), and raising reviewers’ awareness of the required quality standards (20%). Bivariate analysis shows that respondents who work with non-commercial publishers reported an overall greater need to improve implemented quality assessment processes. Work overload, inadequate reward systems, and a lack of time for development activities were cited by respondents as the greatest obstacles to implementing necessary amendments.

1. Introduction

In modern science, the most important scientific knowledge and research findings are disseminated in the form of articles in scholarly journals. Academic articles are usually the highest ranked works on the scale of academic excellence.
However, the modern scholarly periodical market is characterized by inadequately regulated relationships between authors (scientists, researchers), publishers, and users (readers, libraries), as well as highly non-transparent business models. The last two decades were marked by the rise of predatory publishers [1,2,3,4,5,6,7], which, by offering almost unlimited publication space coupled with little or no quality assurance protocols, have taken advantage of the academic “publish or perish” paradigm [8]. This was possible due to the emergence of alternative journal publication models (open, hybrid), which addressed another pressing problem in science: the significant increase in the price of subscriptions to scholarly journals in print and electronic versions [9,10,11]. Research libraries were forced to reduce the volume of subscriptions [12,13], which urged stakeholders in academic journal publishing to re-evaluate and try to balance the interests of scientists/researchers, publishers, libraries, and the general public.
While the open science movement across the globe was gaining momentum, it provided a forum for a wider discussion about other pressing problems of the scholarly communication system [14,15,16]: the need for transparency and reproducibility in research reports (which also led to the idea of open data), the development of new licensing models (e.g., Creative Commons licenses) that addressed the restrictions imposed by copyright [16], and the emergence of new models for research assessment (to promote the transition to open science) [10]. The academic community called for a fundamental worldwide reform [15,16], which is still gaining political endorsement.
As a result of the above-described epochal changes, there is widespread uncertainty about the quality of journals. Editors are perceived as “gatekeepers” of journal quality [17,18,19] and accept their own role as the “ultimate decision-makers” [20]. Even though editors’ responsibility for publishing high-quality materials remains the same and has been widely defined and elaborated [21,22,23,24,25], their role has been significantly shaped by systemic challenges, and is still transforming.
The editorial decision-making process is influenced by the interplay of a complex web of factors. Scientific editors [26] were found to conduct their quality management duties in different ways, ranging from the mechanical processing of articles and reviews, which gives the decision-making power to reviewers, to a thoughtful and critical engagement with content. This is possible since stakeholders’ roles and tasks within the quality assessment processes differ significantly according to each journal’s unique context and characteristics [19,20]. Even if there is a broad agreement on expected tasks related to the assessment of the scientific aspects of manuscripts by a particular stakeholder, there are different expectations of the level of depth of the performed activities [20].
The status of peer review as a scientific gold standard for quality assessment is broadly accepted and well researched [27,28,29], but it is not solely peer-review reports that shape final decisions about manuscript publication. Other important factors include editors’ expert knowledge and ability to assess different aspects of manuscripts [20], authors’ replies, and the opinions of the editorial office staff or board members.
Studies show that there are significant challenges in editorial peer review management, especially due to a constant increase in the number of manuscripts, fueled by quantitatively oriented work performance assessments by employers in scientific fields [27]. This has been recognized to overburden editors, their teams, and reviewers [30,31], and to result in a high proportion of “desk” or initial rejections of manuscripts [32], delays in or a decline in the quality of peer-review reports [20,33], a lack of available reviewers [34], and inadequate resources to motivate and compensate reviewers [35,36] and editors [37]. Day-to-day management of such challenges may result in editors omitting development tasks, such as supporting novice authors [19], managing training activities, and advancing the journal’s visibility and impact.
Innovations, especially in the peer review process [38,39,40,41], and technological advancements (e.g., automatization and use of AI) [42] have the potential to alleviate the editor’s burden to some extent [29] while demanding a higher level of specialization, acquisition of new competencies, and a command over technological advances within editorial offices [43]. Such challenges need to be addressed by journal editors, as the people in charge of the quality assessment process. Their knowledge, skills, perceptions, and beliefs shape the quality of journal content and can help to reduce mistrust and doubt in scientific quality assessment.
In the last decade, there has been little research on the required set of core competencies, knowledge, and skills for successful performance of the editor’s tasks in a contemporary academic environment [44,45,46]. Some research has been conducted on the editor’s role as a manager of the peer review process [17,26,47], but significantly less academic attention has been dedicated to analyzing editors’ beliefs and perceptions regarding their role as quality managers [20,48] or developers [19]. Even less is known about editors’ perceptions of required amendments of quality assurance protocols within editorial offices and factors that make editorial “gatekeeping” and “development” work more or less demanding.
A literature review (a concise analysis can be found elsewhere [49]) indicates two dominant ways of obtaining knowledge in the field. Researchers have approached the subject either by studying data from actual reviews of manuscripts submitted to particular journal(s) and editors’ responses to such review reports [17,26,47], or by researching editors’ experience and beliefs related to their tasks, roles, management of manuscript processing, and editorial decision-making [19,20,48]. While the methodological design of the first group of studies significantly limits the conclusions that researchers can draw about editors’ beliefs and opinions, the studies in the second group focus on editors of journals in particular scientific fields (sociology and criminal justice journals [48], biomedical journals [20], and higher education journals [19]), which limits the generalizability of the results.
Aiming to bridge this gap, I conducted a study across several scientific fields. To acquire a novel perspective, I focused on researching the beliefs and opinions of senior editors of scholarly journals worldwide regarding the editorial management of quality assessment processes. The validity of results was increased by a combination of data collection methods. I first used a qualitative questionnaire on a larger sample of editors, followed by qualitative interviews with a smaller group of respondents who were willing to participate. The reliability of the study was further enhanced by using bivariate analysis to detect associations, which were explored during follow-up interviews with participating editors.
The first part of the study [49] was focused on the criteria (originality, validity, significance) that editors use when assessing the quality of manuscripts. Key findings show that across disciplines, editors perceived manuscripts as excellent if they were innovative, scientifically sound, well-written and well-argued, addressed a significant topic, contained useful results, and had the potential to change (improve) something. Originality emerged as the leading quality criterion in manuscript quality assessment, followed by validity and significance. Results also indicated that factors influencing the overall complexity of editorial quality assessment were: (i) a clearly defined minimum threshold of required quality, (ii) consistency of assessment between individual quality criteria, and (iii) experience of the editor.

2. Materials and Methods

By means of a qualitative questionnaire, the second part of the survey investigated the perceptions of respondents regarding needed changes in their own editorial workflow that would, according to their beliefs, positively affect the quality of journals. Respondents were asked about their achievements, i.e., the most relevant and effective improvements that they implemented in the past to enhance the overall quality of their journals. Further, the survey investigated respondents’ perceptions of changes that should be implemented in the future and inquired how critical the need for a particular improvement was in their estimation. The survey also focused on respondents’ perceptions of anticipated obstacles to implementing changes that would enhance journal quality. The methods are described below. As this is the second part of the survey, a detailed account of the methodology can be found elsewhere [49].

2.1. Sampling

Based on a purposeful (criterion) sampling approach (with predetermined criteria of importance) [50], I selected experienced editors who oversaw journals listed in the databases of two professional organizations: the Committee on Publication Ethics (COPE) and the Directory of Open Access Journals (DOAJ). The range of variation was narrowed to members of these two professional organizations, since they are assumed to be familiar with developments in the field of quality standards, quality assessment processes, and ethics in scholarly journal publishing. In August 2015, 1000 journals were randomly sampled from the online membership database of each organization. The final sample contained 1992 editors of scholarly journals (eight editors were sampled from both databases), who were approached via their journals’ official e-mail addresses and provided with information about the aims, procedures, and confidentiality measures of the study. An analysis of the sampled editors showed that approximately one-fifth of the editors in the final sample were female. This corresponds with the well-documented gender gap in top leadership positions in science [51]. Overall and across disciplines, the percentage of female editors varies from 5% (Robotics AI) to 35% (Aging Neuroscience) [52]. Studies in specific disciplines show that the percentages of women acting as editors-in-chief of prominent journals were: 19% in dermatology journals [53], 21% in medical journals [54], and 16% in environmental biology and natural resource management [55].
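A minimal sketch of this sampling step is given below (Python with pandas). The member lists, e-mail column, and counts other than the 1000-per-database draw are hypothetical placeholders, not the study's actual data exports.

```python
import pandas as pd

# Hypothetical stand-ins for exports of the COPE and DOAJ member databases;
# in the study these were the organizations' online membership lists.
cope = pd.DataFrame({"journal_email": [f"cope_journal_{i}@example.org" for i in range(3000)]})
doaj = pd.DataFrame({"journal_email": [f"doaj_journal_{i}@example.org" for i in range(12000)]})

# Draw 1000 journals at random from each list.
sample = pd.concat([
    cope.sample(n=1000, random_state=2015),
    doaj.sample(n=1000, random_state=2015),
])

# Journals drawn from both databases are contacted only once; in the study,
# eight overlaps left 1992 unique editors to be invited.
contacts = sample.drop_duplicates(subset="journal_email")
print(len(contacts))
```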

2.2. Data Collection

Two invitation e-mails were sent, one on 18 August 2015 and a reminder on 2 September 2015. Out of the 1992 invited editors, 258 (13%) responded by starting to fill out the survey. The online survey was active between 6 August and 24 September 2015. The questionnaire, which consisted of 35 questions (Supplementary File S1), was based on an initial review of the literature and further refined after informal pilot testing. If respondents were editors of multiple journals, they were asked to respond from the viewpoint of the highest-ranked journal that they oversaw. Further, if respondents held different editorial positions (e.g., editor-in-chief, managing editor, associate editor, assistant editor, etc.) at multiple journals, they were asked to answer the questions from the viewpoint of the highest position.

2.3. Data Analysis

By means of content analysis, I analyzed respondents’ textual answers to open-ended questions. In order to gain a more complete understanding of the topic, the thematic analysis was conducted in two steps and entailed both deductive and inductive elements. The first step included developing a separate preliminary coding framework for each open-ended question (see Tables 2 and 10), based on an analysis of existing research results and theory, which enabled me to delimit the area of research. In the second step, I used inductive coding [56], reading and rereading the text and developing codes from phrases used by the respondents. I updated and revised the codebook continuously until repeated coding yielded no additional information. Data gathered in closed-ended multiple-choice questions were analyzed using descriptive statistics and bivariate analysis. As the study was aimed at providing descriptions of journal editors’ perspectives and experiences, descriptive statistics were the primary data analysis tool. Bivariate analysis was used to explore whether there were significant relationships between respondents’ estimation of the need to change a particular editorial process and seven independent variables, including editor characteristics such as age, years of experience, and gender.
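As an illustration of the bivariate step, the minimal sketch below (Python with pandas and SciPy) runs a chi-squared test of independence on the cross-tabulation of one journal characteristic against the estimated need to change a given process. The data frame, column names, and category labels are hypothetical and not taken from the study data.

```python
# Minimal sketch of the bivariate analysis: a chi-squared test of independence
# between one journal characteristic (here, publisher type) and respondents'
# estimation of the need to change a given editorial process.
import pandas as pd
from scipy.stats import chi2_contingency

# Illustrative per-respondent records (one row per respondent).
responses = pd.DataFrame({
    "publisher_type": ["non-commercial", "commercial", "non-commercial", "commercial",
                       "commercial", "non-commercial", "commercial", "non-commercial"],
    "need_to_change": ["high or very high", "lower", "high or very high", "lower",
                       "lower", "high or very high", "lower", "lower"],
})

# Cross-tabulate the two categorical variables into a contingency table.
table = pd.crosstab(responses["publisher_type"], responses["need_to_change"])

# Pearson chi-squared test without Yates continuity correction.
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(table)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.3f}")
```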

3. Results

3.1. Respondent Characteristics

Respondents (n = 258; Table 1) were from 42 countries; a vast majority of them (81%, n = 209) were male and experienced senior scientists (87%, n = 221, with more than 10 years of working experience in science). Among the journals that participants oversaw, 53% (n = 129) were in STEMM fields (natural sciences, engineering and technology, medical and health sciences, agricultural sciences, and mathematics), 31% (n = 76) were in social sciences, and 9% (n = 21) were in the humanities (together, SSH). More than half of the editors (53%, n = 129) reported that they used double-blind peer review, while 37% (n = 91) used single-blind review. Among the journals, 36% (n = 88) were published by the three biggest commercial publishers in 2016 (Reed Elsevier, Springer, and Wiley Blackwell), 23.5% (n = 57) by other commercial publishers, and 39% (n = 95) by non-commercial/non-profit publishers (learned societies, academic organizations, universities, research centers). Among the journals, 70% (n = 170) did not charge authors for article processing. Journals used a variety of distribution models: 52% (n = 127) were pure open access and 24% (n = 58) were closed access, while others used different hybrid options: partial OA (5%, n = 13), retrospective/delayed OA (2%, n = 5), or open choice (13%, n = 32). Among the respondents, 17% (n = 42) were not certain whether their journal was listed in the Journal Citation Report (an annual publication by Clarivate Analytics, previously the intellectual property of Thomson Reuters) or whether it had an impact factor. In their remarks, five respondents explained that their unawareness of the journal’s indexing status was predominantly due to the fact that marketing (and indexing) activities were in the publisher’s domain. Of the journals edited by respondents, 41% (n = 100) did not have an impact factor. For the 40% of journals (n = 98) that had an impact factor, the value was between 0.3 and 8.1, and the highest factors were for journals in the field of medicine.

3.2. Self-Evaluation of Respondents’ Achievements Related to the Journal Quality Assessment Process

As the respondents were experienced editors, most with more than 10 years of working experience in science (87%, n = 221), they were asked to self-evaluate their past decisions related to journal quality assurance protocols. In an open-ended question, they were asked about measures they implemented in the past that, in their opinion, had the greatest impact on the overall quality of published material (q31, n = 177, skipped = 81). The question focused in particular on what they believed to be their main achievements in the area of enhancing the journal’s quality. Responses were coded according to the processes in which editors introduced changes; these codes formed the categories. The analysis showed that respondents were particularly active in six categories (Table 2). Respondents most often (30%, n = 53) intervened in establishing quality criteria, either for desk rejections or post-review. Approximately the same share (29%, n = 51) reported that they made improvements in managing the peer review process. Respondents also introduced changes in the editorial team (25%, n = 44) and equipped editorial offices with technology and electronic services (12%, n = 21). A small number of respondents focused on revising guidelines and training stakeholders (8%, n = 14), and on strengthening the journal’s author pool (6%, n = 11). Table 2 lists the most common actions reported by respondents within each category.

3.3. Respondents’ Perceptions of the Need for Further Advancement of Particular Quality Assurance Processes

Respondents were asked a closed-ended question regarding areas of editorial work that, in their view, needed to be improved and how critical the need for improvement was (q32, n = 214, skipped = 44). They agreed that these areas were primarily: (a) the overall quality of published articles, (b) the overall quality of peer reviews, and (c) the reviewers’ awareness of the required quality standards (see Figure 1).
Over one-quarter of respondents (26%, n = 56) reported that they assessed a high or very high need to increase the overall quality of published articles, while 33% (n = 71) estimated a moderate need. Similarly, 23% of respondents (n = 49) assessed a high or very high need and 26% (n = 56) a moderate need to increase the quality of peer-review reports. With regard to the reviewers’ level of knowledge of the journal’s quality standards, one-fifth of participants (20%, n = 43) assessed a high or very high need to improve the process. Other areas of editorial work were perceived as somewhat less critical in terms of needed changes: 17% of respondents (n = 36) assessed a high or very high need to amend the initial check of manuscripts, and an equal share (17%, n = 36) assessed a high or very high need to change the way they choose reviewers. A critical need to change the form of peer review (for instance, to single/double blind or open review) was reported by 13% of respondents (n = 28).
A further investigation (bivariate analysis) showed several significant relationships between respondents’ estimation of the critical need for a particular improvement and seven independent variables: the respondent’s gender and seniority (years of working experience in science), the journal’s scientific field, distribution model, and type of peer review in use, the publisher’s commercial orientation, and charging for article processing (Table 3).
The most numerous statistically significant associations were between the publisher’s commercial orientation and the dependent variables, followed by associations between the journal’s distribution model (open, closed, hybrid) and the dependent variables (see Table 4).
Table 5, Table 6 and Table 7 show statistically significant and strong associations between the commercial orientation of journal publishers and respondents’ estimation of the need for changes. Overall, respondents who worked with non-commercial publishers reported a greater need to change implemented quality assessment processes.
As seen in Table 5, 22.4% of respondents (n = 47) perceived a high or very high need to change the quality of peer review reports in the journal. Such need was higher among respondents who worked with non-commercial publishers (35.4%, n = 28) than those who worked with commercial publishers (14.5%, n = 19).
Similarly, as seen in Table 6, respondents who worked with non-commercial publishers (28.6%, n = 22) assessed a higher need to change the selection process of reviewers than those who worked with commercial publishers (10.1%, n = 13).
Respondents who worked with non-commercial publishers (22.5%, n = 18) also assessed a higher need to change the type of implemented peer review than those who worked with commercial publishers (7.6%, n = 10), as seen in Table 7.
The same can be observed in relation to increasing reviewers’ awareness of required quality standards. The need to intervene was assessed as higher among respondents who worked with non-commercial publishers (27.3%, n = 21) than those who worked with commercial publishers (15.4%, n = 20) (χ² = 4.303, p < 0.05, n = 207).
Respondents who worked with non-commercial publishers (34.6%, n = 24) also assessed a higher need to enhance the overall quality of published papers than those who worked with commercial publishers (21.5%, n = 28) (χ² = 4.286, p < 0.05, n = 208).
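As a rough check (not part of the study’s reported analysis), the association above for reviewers’ awareness of required quality standards can be reproduced from the reported marginals: assuming 21 of roughly 77 editors working with non-commercial publishers (27.3%) and 20 of roughly 130 editors working with commercial publishers (15.4%) reported a high or very high need, an uncorrected Pearson chi-squared test on the resulting 2 × 2 table yields the statistic cited above. The cell counts in the sketch below (Python with SciPy) are inferred from the percentages, not taken from the study’s tables.

```python
# Reconstruction check: Pearson chi-squared for the reported association between
# publisher type and a high/very high need to raise reviewers' awareness of
# quality standards. Cell counts are inferred from the percentages and n = 207.
from scipy.stats import chi2_contingency

observed = [
    [21, 56],   # non-commercial publishers: high/very high need vs. lower (77 total, 27.3%)
    [20, 110],  # commercial publishers: high/very high need vs. lower (130 total, 15.4%)
]

chi2, p, dof, _ = chi2_contingency(observed, correction=False)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.3f}")  # approx. chi2 = 4.303, p = 0.038
```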
The results of bivariate analysis also showed a statistically significant association between the journal’s distribution model (open, closed, hybrid) and the need to change the quality of peer reviews; this need was assessed as higher by respondents who were editors of open access journals (29.1%, n = 32) than by those who oversaw closed access (14.6%, n = 7) or hybrid (10.6%, n = 5) journals (Table 8).
A statistically significant association was also found between the journal’s scientific field and the respondents’ estimation of the need to change the initial check of manuscripts (Table 9). Respondents who oversaw journals in STEMM fields (natural sciences, engineering and technology, medical and health sciences, agricultural sciences, and mathematics) assessed a higher need for change (21.7%, n = 25) than did respondents who were editors of social science and humanities journals (9.9%, n = 8).

3.4. Perceptions of Needed Improvements to Enhance the Overall Quality of the Journal and Handle Anticipated Obstacles

When challenged in an open-ended question about future plans regarding the quality assurance process in their journal (q33, n = 135, skipped = 123), respondents reported planned improvements in four areas: 45% of respondents (n = 61) focused on managing the peer review process, 14% (n = 19) on the journal’s visibility and impact, 12% (n = 16) on setting and enforcing quality standards, and 7% (n = 10) on stakeholder education and networking. In an open-ended question about obstacles to introducing such changes (q35, n = 160, skipped = 98), respondents reported that they expected to be hampered by diverse factors. Table 10 lists respondents’ planned improvements and the anticipated obstacles to their successful implementation.

4. Discussion

By means of a qualitative questionnaire, a detailed account of needed improvements in quality assessment processes within editorial offices was obtained from 258 senior editors from 42 countries across scientific fields.
The findings indicate that respondents put significant effort into journal development, despite being constrained by day-to-day managerial challenges (especially keeping up with standards) and systemic obstacles, such as a lack of time and financial resources. Navigating between these often contradictory roles, as indicated in [19] and confirmed in this study, is complex and challenging. Even though in recent years scholarly attention has been predominantly focused on peer review, with a few qualitative studies also addressing editorial perceptions of peer review management [20,48], the findings indicate that the scope of editorial tasks is significantly larger, and editors have outgrown the conventional “gatekeeping” role. Besides peer review management taking up a large proportion of editorial (and development) work [19,20], which the findings of this study confirm, respondents listed past advancements in raising awareness regarding standards, developing and training editorial teams, enlarging the pool of collaborating authors, and implementing new technologies and e-services.
Relying on a decade or more of editorial experience, study respondents (n = 177) believed that the most relevant improvements they implemented were in raising or shaping the required quality criteria for manuscripts (30%, n = 53) and managing the peer review process (29%, n = 51).
Respondents influenced the development of desired manuscript quality standards by persistently publishing only those manuscripts that fit the scope of the journal, and rejecting all other (even very high-quality) papers. Desk rejections, however, come with a risk of authors’ future resentment. If a manuscript is rejected after either the initial check or the peer review process, the editorial reasons must be justified and properly explained to the author, stressed respondents. As indicated in [30] and confirmed in this study, publishers and editors feared that inexperienced authors in particular might misinterpret such editorial decisions. Furthermore, the reasons for desk rejection can be deeper, as suggested in [19], where researchers listed several, to some extent controversial reasons for immediate rejections, such as the national and cultural distribution of submissions or a manuscript relying upon literature from a different discipline.
To manage the reviewers’ workload, respondents raised the threshold of required quality for submitting articles to the review process. Respondents took actions to ensure accurate verification of quality in manuscripts and insisted on set standards. This entailed several awareness-raising activities, including raising expectations among stakeholders (especially editorial teams and reviewers) regarding the general level of required quality of published material. To help improve the format and language of articles, respondents implemented support services (format, grammar, and language editing). Respondents believed that providing additional feedback on the quality of the work to authors, reviewers, and editorial teams was beneficial, while previous research indicated that honest and independent criticism by peers is especially important for junior [30] or inexperienced [19] researchers. Respondents also noted that this can be upgraded by inviting such stakeholders (especially authors and reviewers) to actively participate in proposing improvements.
There was agreement among respondents that interventions in the peer review process succeed only if they involve team effort. The key was constructive cooperation between the editor (and board of editors), editorial team, reviewers, authors, and publisher (and publisher’s team). Throughout the study [49], the division of roles and tasks between reviewers and editors/editorial team proved to be especially challenging. Stakeholders’ understanding of the purpose of peer review is often not aligned, and navigating through diverse expectations can be demanding, as pointed out by previous research [20,30,37]. To efficiently manage the review process, respondents took action to increase the number of reviewers (creating and expanding their database of potential reviewers) and closely monitored the workload of individual reviewers. They made efforts to accelerate the review process by imposing stricter adherence to deadlines, sending more reminders to reviewers, and taking faster action if reviewers did not respond. However, such actions need to be conducted with caution, since past studies [30] highlighted that inefficient manuscript handling (especially by editors) is among the main reasons why reviewers feel overwhelmed by the amount of reviewing that they are requested to do. Too much pressure can also lead to reviewers declining reviews more often and failing to detect errors [29].
Respondents listed several actions that they believed affected the quality of reviews: they ensured transparency and anonymity of reviews, discussed quality standards with reviewers, raised awareness among reviewers about the journal’s quality criteria, and created a review form, a list of questions, guidelines, and clarifications to help reviewers with their work. Among the actions they took in developing standards and raising awareness among stakeholders about the standards (8%, n = 14), respondents paid particular attention to developing standards for peer review and revision of guidelines for authors (and other stakeholders). The need for in-depth revision of reviewing instructions was confirmed also by other studies [30], where respondents (authors, reviewers) agreed that it could to some extent lighten the reviewer’s burden. They also stressed the importance of implementing quality and ethical standards of professional organizations (COPE, DOAJ, etc.). Respondents agreed that implementing review workshops for young professionals also significantly contributed to journal quality.
In [30], a contribution to the creation, curation, and enhancement of the scientific community was recognized by stakeholders as one of the purposes of the peer review process. Respondents reported that they actively engage in various activities of community curation, which are not limited to peer review management. With the intention to build a successful editorial team, respondents (25%, n = 44) carried out a series of actions focused on finding editors/assistants who were experts in specific thematic areas of the journal and further expanded the editorial team with influential, experienced experts as well as young, ambitious collaborators. Respondents were mindful of ensuring national diversity on their editorial teams. They provided strict supervision of the work of team members and encouraged discussions on desired and minimum quality standards.
Respondents were able to enlarge the pool of collaborating authors (6%, n = 11) by liaising with the most established authors in the scientific field (scope) of the journal and inviting them to submit articles for publication. This also involved presentations at conferences and other events in order to attract established authors. Respondents also invited authors from other countries to submit and encouraged submission of manuscripts by younger authors. Respondents believed in the positive outcome of connecting processes and stakeholders; they invited authors to conduct reviews and peer reviewers to join the editorial board. To show recognition, respondents reported that they wrote appreciation letters to authors of the most influential (often cited) articles published by the journal. They also stressed the importance of advertising to the target professional public and mentioned that they closely oversaw the production of advertising content (e-news, social media).
For the purpose of enhancing journal quality, 12% of respondents (n = 21) wrote that they implemented new technologies and e-services. They introduced electronic and online content management systems (article submission management, electronic/online submission system) and online editorial process management. They also implemented electronic systems for detecting plagiarism and checking double submissions. To build the journal’s reputation, respondents obtained indexes (selecting the most influential/reputable indexing systems) and implemented activities to obtain the impact factor.
Even though respondents provided detailed information on their past achievements, the results indicate that there is still a critical need for future improvements in quality assessment processes. Respondents (n = 214) reported a high or very high need to improve the following: amend (raise) the overall quality of published articles (26%, n = 56), increase the quality of peer-review reports (23%, n = 49), increase the reviewers’ level of knowledge of the journal’s required quality standards (20%, n = 43), change the reviewer selection process (17%, n = 36), and change the form of peer review (for instance, to single/double blind or open review) (13%, n = 28).
Bivariate analysis showed several strong positive associations between a critical (high or very high) need to change a particular quality assessment process (dependent variable) and three independent variables; there were five positive associations between dependent variables and the (non-)commercial orientation of the journal’s publisher, one between dependent variables and the journal’s distribution model, and one between the dependent variable and the journal’s scientific field.
Overall, the share of respondents who reported a critical need to change implemented quality assessment processes was significantly higher among those who worked with non-commercial publishers (compared to commercial publishers) in the following areas:
  • Changing the quality of peer review reports: respondents working with non-commercial publishers, 35.4% (n = 28); those working with commercial publishers, 14.5% (n = 19) (χ² = 12.438, p < 0.01);
  • Amending the reviewer selection process: respondents working with non-commercial publishers, 28.6% (n = 22); those working with commercial publishers, 10.1% (n = 13) (χ² = 11.693, p < 0.01);
  • Changing the type of peer review implemented: respondents working with non-commercial publishers, 22.5% (n = 18); those working with commercial publishers, 7.6% (n = 10) (χ² = 9.538, p < 0.01);
  • Increasing reviewers’ awareness of required quality standards: respondents working with non-commercial publishers, 27.3% (n = 21); those working with commercial publishers, 15.4% (n = 20) (χ² = 4.303, p < 0.05);
  • Enhancing the overall quality of published papers: respondents working with non-commercial publishers, 34.6% (n = 24); those working with commercial publishers, 21.5% (n = 28) (χ² = 4.286, p < 0.05).
The results of bivariate analysis also show a statistically significant association (χ² = 8.412, p < 0.05) between the journal’s distribution model (open, closed, hybrid) and the need to change the quality of peer review reports. The need to change the quality of peer reviews was higher among editors of open access journals (29.1%, n = 32) than among those who oversaw closed access (14.6%, n = 7) or hybrid (10.6%, n = 5) journals.
A statistically significant association (χ² = 4.776, p < 0.05) was also found between the journal’s scientific field and the respondent’s estimation of the need to change the initial check of manuscripts, with a greater need assessed by respondents who oversaw journals in STEMM fields (21.7%, n = 25) than by editors of SSH journals (9.9%, n = 8).
While editors were aware of controversial issues for scholarly journals, such as non-transparent business models, predatory publishing practices, and inadequate quality assurance protocols, they appeared, as discussed in [19], to focus more on improving their journals and managing their workload. The present study confirms this, while the findings also show that the commercial or non-commercial orientation of a journal’s publisher is an important factor shaping academic editors’ perceptions of the need for further advancements in editorial quality assessment processes. Two other factors, the journal’s distribution model and its scientific field, play a role to a far lesser extent. The influence of these factors on respondents’ conduct and beliefs is indirect and not widely recognized by respondents. This conclusion provides solid ground for a further, detailed investigation of the above-mentioned associations. Since the interpretation of such bivariate analysis is limited, these associations will be explored further in the next phase of the study.
The role of editors as developers, as indicated in [19,30], was further investigated in this study by exploring their aspirations and plans regarding future advancements. Answers related to needed improvements (Table 10) to a large extent corresponded with aspects of journal management that respondents had already amended in the past (see Table 2). This indicates that the past improvements discussed above were only partially successful, and that constant alterations and innovations are needed for a long-term impact.
The study revealed that a certain shift in editors’ attention might be required for the future successful implementation of the advancements. Respondents believed that additional interventions are needed to provide better-quality peer review reports and a speedier peer review process. They listed a few: the overall quality of reviews should be raised, the reports should be more professional, and reviews need to offer unbiased criticism and constructive suggestions for manuscript improvements. However, when taking into account obstacles, they named reviewer bias, unfairness, and unethical conduct as serious and pressing constraints on quality. Considering existing evidence on the diverse and sometimes controversial understanding of the division of roles and tasks between peer reviewers and journal editors [19,20,30], respondents did not pay much attention to managing expectations and gaining more insight into the everyday realities of journal reviewers. Rather, respondents approached this issue somewhat mechanically (for instance, by setting and enforcing guidelines for reviewers). Respondents believed that reviewers should be provided with more information and support, but they understood their own role to be more on the managerial side: reviewers need to be “pressed to submit reports on time” and the quality of their work should be “checked more consistently”.
There was, however, a strong consensus among respondents about their role as community curators. Scholarly journals are a collaborative endeavor, as stressed by respondents, and they require investment in building a strong community among stakeholders; linking editors, reviewers, and authors; and supporting knowledge transfer within the community. When considering their contribution to the community, respondents mentioned facilitating stakeholder training and networking activities. Respondents reported that they plan to introduce workshops for reviewers (including mentoring, management, feedback on the quality of reviews, etc.) and training for authors on how to write and how to conduct quality research (thus gaining authors and increasing the quality of their work; the give-and-get principle). The biggest obstacles that respondents expected when implementing these activities are related to a lack of time and money; all stakeholders are overburdened and stakeholder motivation is generally low.
There are a number of limitations in this study. Even though assessing manuscript quality is a collective task, this study was limited to the perceptions of editors. Sampling was focused only on experienced senior editors-in-chief who were members of COPE and DOAJ; hence, the generalization of study findings is also limited in this respect. The size of the sample was relatively small; the DOAJ database currently has a little less than 17 thousand journals indexed, while COPE has slightly more than 13 thousand journal members. Since respondents reported on their own practices, the results might be influenced by the desire to give socially acceptable answers or by inconsistencies between what editors stated about their quality management practices and how they actually acted or what they experienced in their everyday editorial work. I tried to mitigate such risk by using a combination of open- and closed-ended questions to detect possible inconsistencies in answers. Regarding the bivariate analysis, the results do not provide any qualitative data that would offer a credible basis for explaining the listed associations. The added value of this part of the study is, however, that the results offer a solid context for further (qualitative) exploration into the realities of editors who work with commercial and non-commercial publishers and journals with various distribution models (open, hybrid, closed).
The significance of this study is in setting a context for editors’ perception and understanding of their “development” role by revealing what they believe are their biggest achievements and what advancements they plan to introduce in the future. Furthermore, by identifying what editors perceive as major obstacles to planned advancements, this study aids in understanding the constraints to and limits on current quality assessment management. The study’s key findings can offer insights into how these issues can be addressed.

Supplementary Materials

The following are available online at https://www.mdpi.com/article/10.3390/publications10010012/s1, Questionnaire S1.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy restrictions.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Beall, J. Predatory publishers are corrupting open access. Nature 2012, 489, 179. [Google Scholar] [CrossRef] [Green Version]
  2. Beall, J. Predatory publishing is just one of the consequences of gold open access. Learn. Publ. 2013, 26, 79–83. [Google Scholar] [CrossRef]
  3. Beall, J. What I learned from predatory publishers. Biochem. Med. 2017, 27, 273–278. [Google Scholar] [CrossRef]
  4. Bohannon, J. Who’s Afraid of Peer Review? Science 2013, 342, 60–65. [Google Scholar] [CrossRef] [PubMed]
  5. Manca, A.; Cugusi, L.; Dragone, D.; Deriu, F. Predatory journals: Prevention better than cure? J. Neurol. Sci. 2016, 370, 161. [Google Scholar] [CrossRef] [PubMed]
  6. Shamseer, L.; Moher, D.; Maduekwe, O.; Turner, L.; Barbour, V.; Burch, R.; Clark, J.; Galipeau, J.; Roberts, J.; Shea, B.J. Potential predatory and legitimate biomedical journals: Can you tell the difference? A cross-sectional comparison. BMC Med. 2017, 15, 28. [Google Scholar] [CrossRef] [Green Version]
  7. Krawczyk, F.; Kulczycki, E. How is open access accused of being predatory? The impact of Beall’s lists of predatory journals on academic publishing. J. Acad. Librariansh. 2021, 47, 102271. [Google Scholar] [CrossRef]
  8. Suber, P. Open Access and Quality. DESIDOC J. Libr. Inf. Technol. 2008, 28, 49–56. [Google Scholar] [CrossRef]
  9. Kyrillidou, M. Research Library Trends: A Historical Picture of Services, Resources, and Spending. Res. Libr. Issues Q. Rep. ARL CNI SPARC 2012, 280, 20–27. [Google Scholar] [CrossRef] [Green Version]
  10. Legge, M. Towards sustainable open access: A society publisher’s principles and pilots for transition. Learn. Publ. 2020, 33, 76–82. [Google Scholar] [CrossRef]
  11. Jurchen, S. Open Access and the Serials Crisis: The Role of Academic Libraries. Tech. Serv. Q. 2020, 37, 160–170. [Google Scholar] [CrossRef]
  12. McGuigan, G.S.; Russel, R. The business of academic publishing: A strategic analysis of the academic journal publishing industry and its impact on the future of scholarly publishing. Electron. J. Acad. Spec. Librariansh. 2008, 9. Available online: https://southernlibrarianship.icaap.org/content/v09n03/mcguigan_g01.html (accessed on 16 September 2021).
  13. Ramello, G.B. Copyright & endogenous market structure: A glimpse from the journal-publishing market. Rev. Econ. Res. Copyr. Issues 2010, 7, 7–29. [Google Scholar]
  14. BOAI-Budapest Open Access Initiative. Budapest Open Access Initiative. 2002. Available online: http://www.budapestopenaccessinitiative.org/read (accessed on 16 September 2021).
  15. Max Planck Gesellschaft. Berlin Declaration on Open Access to Knowledge in the Science and Humanities. 2003. Available online: http://openaccess.mpg.de/Berlin-Declaration (accessed on 16 September 2021).
  16. Shavell, S. Should Copyright of Academic Works be Abolished? J. Leg. Anal. 2010, 2, 301–358. [Google Scholar] [CrossRef]
  17. Miller, J.; Perrucci, R. Back Stage at Social Problems: An Analysis of the Editorial Decision Process, 1993–1996. Soc. Probl. 2001, 48, 93–110. [Google Scholar] [CrossRef]
  18. Hausmann, L.; Murphy, S.P. The challenges for scientific publishing, 60 years on. J. Neurochem. 2016, 139, 280–287. [Google Scholar] [CrossRef]
  19. Acker, S.; Rekola, M.; Wisker, G. Editing a higher education journal: Gatekeeping or development? Innov. Educ. Teach. Int. 2021, 59, 104–114. [Google Scholar] [CrossRef]
  20. Glonti, K.; Boutron, I.; Moher, D.; Hren, D. Journal editors’ perspectives on the roles and tasks of peer reviewers in biomedical journals: A qualitative study. BMJ Open 2019, 9, e033421. [Google Scholar] [CrossRef] [PubMed]
  21. Council of Science Editors. White Paper on Publication Ethics: CSE’s White Paper on Promoting Integrity in Scientific Journal Publications. 2020. Available online: http://www.councilscienceeditors.org/resource-library/editorial-policies/white-paper-on-publication-ethics/ (accessed on 16 September 2021).
  22. ALLEA-All European Academies. The European Code of Conduct for Research Integrity; ALLEA: Berlin, Germany, 2017; Available online: https://allea.org/code-of-conduct/ (accessed on 16 September 2021).
  23. COPE-Committee on Publication Ethics; DOAJ-Directory of Open Access Journals; OASPA-Open Access Scholarly Publishers Association & WAME-World Association of Medical Editors. Principles of Transparency and Best Practice in Scholarly Publishing. 2018. Available online: https://doaj.org/bestpractice (accessed on 16 September 2021).
  24. ICMJE-International Committee of Medical Journal Editors. Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals. Updated December 2019. Available online: http://www.icmje.org/recommendations (accessed on 16 September 2021).
  25. Valkenburg, G.; Dix, G.; Tijdink, J.; de Rijcke, S. Expanding Research Integrity: A Cultural-Practice Perspective. Sci. Eng. Ethics 2021, 27, 10. [Google Scholar] [CrossRef] [PubMed]
  26. Newton, D.P. Quality and Peer Review of Research: An Adjudicating Role for Editors. Account. Res. 2010, 17, 130–145. [Google Scholar] [CrossRef] [Green Version]
  27. Harley, D.; Acord, S.K.; King, C.J. Assessing the Future Landscape of Scholarly Communication: An Exploration of Faculty Values and Needs in Seven Disciplines; University of California: Berkeley, CA, USA, 2010; Available online: https://escholarship.org/uc/item/15x7385g (accessed on 16 September 2021).
  28. Mulligan, A.; Hall, L.; Raphael, E. Peer review in a changing world: An international study measuring the attitudes of researchers. J. Am. Soc. Inf. Sci. Technol. 2013, 64, 132–161. [Google Scholar] [CrossRef]
  29. Elsevier & Sense About Science. Quality, Trust and Peer Review: Researchers Perspectives 10 Years on. 2019. Available online: https://senseaboutscience.org/wp-content/uploads/2019/09/Quality-trust-peer-review.pdf (accessed on 16 September 2021).
  30. Severin, A.; Chataway, J. Purposes of Peer Review: A Qualitative Study of Stakeholder Expectations and Perceptions; SocArXiv: Ithaca, NY, USA, 2020. [Google Scholar] [CrossRef]
  31. Severin, A.; Chataway, J. Overburdening of peer reviewers: A multi-stakeholder perspective on causes and effects. Learn. Publ. 2021, 29, 41–50. [Google Scholar] [CrossRef]
  32. Primack, R.B.; Regan, T.J.; Devictor, V.; Zipf, L.; Godet, L.; Loyola, R.; Maas, B.; Pakeman, R.J.; Cumming, G.; Bates, A.E.; et al. Are scientific editors reliable gatekeepers of the publication process? Biol. Conserv. 2019, 238, 108232. [Google Scholar] [CrossRef]
  33. Jurkat-Rott, K.; Lehmann-Horn, F. Reviewing in science requires quality criteria and professional reviewers. Eur. J. Cell Biol. 2004, 83, 93–95. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  34. Virlogeux, V.; Trépo, C.; Pradat, P. The growing dilemma of peer review: A three-generation viewpoint. Eur. Sci. Ed. 2018, 44, 32–34. [Google Scholar] [CrossRef]
  35. Zaharie, M.A.; Osoian, C.L. Peer review motivation frames: A qualitative approach. Eur. Manag. J. 2016, 34, 69–79. [Google Scholar] [CrossRef]
  36. Warne, V. Rewarding reviewers-sense or sensibility? A Wiley study explained. Learn. Publ. 2016, 29, 41–50. [Google Scholar] [CrossRef] [Green Version]
  37. Vrana, R. Editorial challenges in a small scientific community: Study of Croatian editors. Learn. Publ. 2018, 31, 369–374. [Google Scholar] [CrossRef] [Green Version]
  38. Birgit, S.; Görögh, E. New Toolkits on the Block: Peer Review Alternatives in Scholarly Communication; IOS Press Ebooks: Amsterdam, The Netherlands, 2017; pp. 62–74. [Google Scholar] [CrossRef]
  39. BioMed Central and Digital Science. SpotOn Report: What Might Peer Review Look Like in 2030? 2017. Available online: http://events.biomedcentral.com/wp-content/uploads/2017/04/SpotOn_Report_PeerReview-1.pdf (accessed on 16 September 2021).
  40. Tennant, J.P.; Dugan, J.M.; Graziotin, D.; Jacques, D.C.; Waldner, F.; Mietchen, D.; Elkhatib, Y.; Collister, L.B.; Pikas, C.K.; Crick, T.; et al. A multi-disciplinary perspective on emergent and future innovations in peer review [version 3; peer review: 2 approved]. F1000Research 2017, 6, 1151. [Google Scholar] [CrossRef] [PubMed]
  41. Ross-Hellauer, T.; Deppe, A.; Schmidt, B. Survey on open peer review: Attitudes and experience amongst editors, authors and reviewers. PLoS ONE 2017, 12, e0189311. [Google Scholar] [CrossRef] [Green Version]
  42. Heaven, D. AI peer reviewers unleashed to ease publishing grind. Nature 2018, 563, 609–610. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  43. Steps towards transparency in research publishing. Nature 2017, 549, 431. [CrossRef] [PubMed] [Green Version]
  44. Galipeau, J.; Barbour, V.; Baskin, P.; Bell-Syer, S.; Cobey, K.; Cumpston, M.; Deeks, J.; Garner, P.; MacLehose, H.; Shamseer, L.; et al. A scoping review of competencies for scientific editors of biomedical journals. BMC Med. 2016, 14, 16. [Google Scholar] [CrossRef] [Green Version]
  45. Galipeau, J.; Cobey, K.D.; Barbour, V.; Baskin, P.; Bell-Syer, S.; Deeks, J.; Garner, P.; Shamseer, L.; Sharon, S.; Tugwell, P.; et al. An international survey and modified Delphi process revealed editors’ perceptions, training needs, and ratings of competency-related statements for the development of core competencies for scientific editors of biomedical journals [version 1; peer review: 2 approved]. F1000Research 2017, 6, 1634. [Google Scholar] [CrossRef] [PubMed]
  46. Moher, D.; Galipeau, J.; Alam, S.; Barbour, V.; Bartolomeos, K.; Baskin, P.; Bell-Syer, S.; Cobey, K.D.; Chan, L.; Clark, J.; et al. Core competencies for scientific editors of biomedical journals: Consensus statement. BMC Med. 2017, 15, 167. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  47. Turcotte, C.; Drolet, P.; Girard, M. Study design, originality and overall consistency influence acceptance or rejection of manuscripts submitted to the Journal. Can. J. Anaesth./J. Can. D’anesthésie 2004, 51, 549–556. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  48. Mustaine, E.E.; Tewksbury, R. Exploring the Black Box of Journal Manuscript Review: A Survey of Social Science Journal Editors. J. Crim. Justice Educ. 2013, 24, 386–401. [Google Scholar] [CrossRef]
  49. Krapež, K. Editors’ responsibility for publishing high-quality research results: Worldwide study into current challenges in quality assessment processes. LeXonomica 2022, forthcoming. [Google Scholar]
  50. Palinkas, L.A.; Horwitz, S.M.; Green, C.A.; Wisdom, J.P.; Duan, N.; Hoagwood, K. Purposeful Sampling for Qualitative Data Collection and Analysis in Mixed Method Implementation Research. Adm. Policy Ment. Health Ment. Health Serv. Res. 2015, 42, 533–544. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  51. Squazzoni, F.; Bravo, G.; Farjam, M.; Marusic, A.; Mehmani, B.; Willis, M.; Birukou, A.; Dondio, P.; Grimaldo, F. Peer review and gender bias: A study on 145 scholarly journals. Sci. Adv. 2021, 7, eabd0299. [Google Scholar] [CrossRef]
  52. Helmer, M.; Schottdorf, M.; Neef, A.; Battaglia, D. Gender bias in scholarly peer review. eLife 2017, 6, e21718. [Google Scholar] [CrossRef] [PubMed]
  53. Gollins, C.; Shipman, A.; Murrell, D. A study of the number of female editors-in-chief of dermatology journals. Int. J. Women’s Dermatol. 2017, 3, 185–188. [Google Scholar] [CrossRef] [PubMed]
  54. Pinho-Gomes, A.-C.; Vassallo, A.; Thompson, K.; Womersley, K.; Norton, R.; Woodward, M. Representation of Women Among Editors in Chief of Leading Medical Journals. JAMA Netw. Open 2021, 4, e2123026. [Google Scholar] [CrossRef] [PubMed]
  55. Amrein, K.; Langmann, A.; Fahrleitner-Pammer, A.; Pieber, T.R.; Zollner-Schwetz, I. Women Underrepresented on Editorial Boards of 60 Major Medical Journals. Gend. Med. 2011, 8, 378–387. [Google Scholar] [CrossRef]
  56. Gehman, J.; Glaser, V.; Eisenhardt, K.M.; Gioia, D.; Langley, A.; Corley, K.G. Finding Theory–Method Fit: A Comparison of Three Qualitative Approaches to Theory Building. J. Manag. Inq. 2018, 27, 284–300. [Google Scholar] [CrossRef]
Figure 1. Respondents’ perceptions of need for improvement in particular quality assessment processes (q32, n = 214, skipped = 44).
Table 1. Respondent characteristics.
Criterion (question no.; N answered): Description
Years of working experience in science (q6; N = 255): Less than 10 years (n = 34, 13%), 11–20 years (n = 54, 21%), more than 20 years (n = 167, 66%)
Gender (q2; N = 258): Female (n = 49, 19%), male (n = 209, 81%)
Academic field of journal (FRASCATI classification) (q9; N = 243): Natural sciences (n = 49, 20%), engineering and technology (n = 14, 6%), medical and health sciences (n = 56, 23%), agricultural sciences (n = 10, 4%), social sciences (n = 76, 31%), humanities (n = 21, 9%), cross-disciplinary (n = 17, 7%)
Location of editor (q3; N = 258): USA (n = 57, 22%), UK (n = 25, 10%), Germany (n = 18, 7%), Italy (n = 13, 5%), Turkey (n = 11, 4%), Poland, Netherlands, Russia (each n = 10, 4%), Brazil (n = 9, 4%), Canada and Norway (each n = 7, 3%), Indonesia and Ukraine (each n = 6, 2%), Australia, Pakistan, Sweden, Switzerland (each n = 5, 2%), other Europe (n = 32, 12%), other Asia (n = 10, 4%), other Africa (n = 4, 2%), other America (n = 2, 1%)
Type of peer review in use (q16; N = 243): Single-blind (n = 91, 37%), double-blind (n = 129, 53%), open peer review (n = 4, 2%), peer review not blinded (n = 7, 3%), other (n = 9, 4%)
Type of publisher (q10; N = 243): Commercial (Reed Elsevier, Springer, Wiley Blackwell) (n = 88, 36%), other commercial (n = 57, 23.5%), non-commercial (n = 95, 39%)
Charging authors for article processing (q13; N = 243): Yes (n = 50, 21%), no (n = 170, 70%), authors can choose to pay (publishing options, open choice, etc.) (n = 23, 9.5%)
Journal distribution model (q16; N = 243): Closed access (n = 58, 24%), open access (pure) (n = 127, 52%), hybrid (partial OA) (n = 13, 5%), hybrid (retrospective/delayed OA) (n = 5, 2%), hybrid (open choice) (n = 32, 13%), other (n = 5, 2%)
Journal listed in Journal Citation Report (q15; N = 243): No (n = 100, 41%), unsure/do not know (n = 42, 17%), yes (n = 98, 40%); impact factor between 0.3 and 8.1 (median IF = 2.3), highest factors in the field of medicine
Table 2. Improvements in quality assessment processes implemented by journal editors.

Setting standards for manuscripts' initial check and shaping quality criteria (30%, n = 53):
- Insisted on publishing only articles that fall within the journal's scope; were more rigorous about rejecting all others
- Increased the percentage of immediate rejections (desk rejections) or raised the threshold for submission to the review process (reviewer fatigue prevention)
- Were more rigorous (stricter, uncompromising, more precise) about verification of quality criteria in articles and adhered to set standards
- Changed (raised or shaped) quality criteria, raised the required level of quality
- Introduced formatting and grammar editing services
- Provided feedback to (all) stakeholders on the quality of their work and made suggestions for improvement

Ensuring a rigorous peer review process (29%, n = 51):
- Expanded the database of potential reviewers or created a new database
- Accelerated the review process (stricter adherence to deadlines, more warnings to reviewers, faster action in case of reviewer non-response)
- Increased the number of reviewers for each manuscript (from two to three or more)
- Ensured transparency of review by introducing an open review process
- Introduced stricter criteria for ensuring anonymity (for blind reviews)
- Encouraged discussion of quality standards with reviewers, raised awareness among reviewers about quality criteria (instructions, recommendations)
- Provided (designed) a review form to assist reviewers in assessing quality (indicating key criteria for assessing the quality of articles)
- Monitored the workload of individual reviewers

Building a highly professional editorial team (25%, n = 44):
- Enlarged the editorial team with recognized and experienced assistant editors who are experts in particular thematic areas, as well as with young, ambitious collaborators
- Ensured national diversity on the editorial team
- Introduced a detailed scientific background check of editorial team members and ensured a high level of professionalism
- Encouraged discussion of desirable and minimum quality standards among the editorial team

Introducing e-services and technological improvements (12%, n = 21):
- Introduced an online content management system (including article submission management, e-submission system)
- Implemented e-systems for plagiarism detection and verification of double submission
- Obtained indexes (critically, choosing the most influential/reputable indexing systems)
- Encouraged activities to obtain an impact factor or to maintain/raise the impact factor in the journal's field of influence

Introducing guidelines, new standards, and training (8%, n = 14):
- Revised authors' guidelines
- Developed peer-review-related standards
- Introduced a new/more precise definition of the journal's scope and key thematic areas
- Implemented standards of professional organizations (COPE, etc.)
- Introduced peer-review workshops for young professionals
- Provided feedback to reviewers (regarding the quality of their reviews)
- Implemented ethical standards

Strengthening the journal's author pool (6%, n = 11):
- Presented and advertised at conferences and other events in order to attract more established authors
- Connected with established authors in the fields covered by the journal
- Invited authors from other (underrepresented) countries
- Encouraged and provided feedback to younger authors, including clear guidelines on improving manuscripts
- Searched for new or insufficiently researched topics and matched them with potential authors
- Encouraged stakeholder networking, and invited authors to review and peer reviewers to join the editorial board
- Advertised to the targeted professional public
- Created new advertising content: e-news, active profiles on social media, etc.
- Wrote thank-you letters to authors who published articles with high impact

Note: q31, n = 177, skipped = 81. The table lists the share of respondents (%) who indicated each category. If respondents listed several actions that fell into different categories, their responses were counted in all categories to which they referred. Respondents often mentioned a combination of actions in different areas.
Table 3. List of independent and dependent variables for bivariate analysis.

Independent Variable | Description | Value
gender | Respondent's gender | 0 = Male, 1 = Female
exp20 | Respondent's working experience in science | 0 = Less than 20 years, 1 = 20 years or more
area | Journal's scientific field | 0 = STEMM fields, 1 = SSH fields
comm | Commercial orientation of publisher | 0 = Non-commercial, 1 = Commercial
model | Journal's distribution model | 1 = Closed, 2 = Open, 3 = Hybrid
charge | Article processing charges | 0 = No, 1 = Yes
type | Type of peer review | 1 = Single blind, 2 = Double blind, 3 = Other

Dependent Variable | Description | Value
ntc_initialc | Need to change initial check of manuscripts | 0 = Not high (moderate, low, very low, no need), 1 = High and very high
ntc_type | Need to change type of peer review in use | same coding as above
ntc_selp | Need to change selection process of reviewers | same coding as above
ntc_reva | Need to change reviewers' awareness of required quality standards | same coding as above
ntc_qrev | Need to change quality of peer review reports | same coding as above
ntc_qpap | Need to change overall quality of published papers | same coding as above
Table 4. Chi-square test for individual bivariate analysis.

Dependent Variable | Gender | Exp20 | Area | Comm | Model | Charge | Type
ntc_initialc | 0.074 | 3.108 * | 4.776 ** | 3.505 * | 3.344 | 0.015 | 3.134
ntc_type | 1.375 | 0.179 | 0.902 | 9.538 *** | 5.177 * | 0.793 | 0.164
ntc_selp | 0.551 | 0.032 | 0.702 | 11.693 *** | 4.951 * | 0.109 | 0.686
ntc_reva | 2.901 * | 0.037 | 2.186 | 4.303 ** | 2.141 | 0 | 1.262
ntc_qrev | 2.672 | 0.543 | 0.641 | 12.438 *** | 8.412 ** | 0.028 | 0.825
ntc_qpap | 0.036 | 0.279 | 0.484 | 4.286 ** | 0.673 | 1.553 | 0.437
p-value: *** p < 0.01, ** p < 0.05, * p < 0.1.
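For readers who want to reproduce this kind of bivariate screening, the sketch below shows how a battery of Pearson chi-square tests such as the one in Table 4 could be generated. It is illustrative only and is not the author's analysis script: the data frame `responses` and its column names (taken from the variable labels in Table 3) are assumed, and `correction=False` is used because the published 2×2 statistics (e.g., Table 5's χ² = 12.438) match the uncorrected Pearson statistic.

```python
# A minimal sketch (not the author's actual code): run a Pearson chi-square
# test for every (independent, dependent) pair listed in Tables 3 and 4.
# Assumes a hypothetical DataFrame `responses` with one row per editor and
# columns coded as in Table 3.
import pandas as pd
from scipy.stats import chi2_contingency

INDEPENDENT = ["gender", "exp20", "area", "comm", "model", "charge", "type"]
DEPENDENT = ["ntc_initialc", "ntc_type", "ntc_selp", "ntc_reva", "ntc_qrev", "ntc_qpap"]

def chi_square_battery(responses: pd.DataFrame) -> pd.DataFrame:
    """Cross-tabulate each variable pair and collect the chi-square statistics."""
    rows = []
    for dep in DEPENDENT:
        for ind in INDEPENDENT:
            # crosstab drops respondents with a missing value on either variable
            table = pd.crosstab(responses[dep], responses[ind])
            chi2, p, dof, _ = chi2_contingency(table, correction=False)
            rows.append({"dependent": dep, "independent": ind,
                         "chi2": round(chi2, 3), "p": round(p, 3), "dof": dof})
    return pd.DataFrame(rows)
```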
Table 5. Association between publisher's commercial orientation and respondents' estimation of the need to change the quality of peer review reports.

Need to Change Quality of Peer Reviews | Non-Commercial Publishers | Commercial Publishers | Total
Not high (moderate, low, very low, no need) | 51 (64.6%) | 112 (85.5%) | 163 (77.6%)
High and very high | 28 (35.4%) | 19 (14.5%) | 47 (22.4%)
Total | 79 (100%) | 131 (100%) | 210 (100%)
χ² = 12.438, p < 0.01.
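As a quick arithmetic check, the statistic reported for Table 5 can be recomputed from the published cell counts alone. The sketch below is illustrative rather than the author's code; it runs a Pearson chi-square test on the 2×2 table and passes `correction=False` because SciPy applies Yates' continuity correction to 2×2 tables by default, whereas the reported value corresponds to the uncorrected statistic.

```python
# Recompute the Table 5 chi-square statistic from the published counts.
from scipy.stats import chi2_contingency

observed = [[51, 112],   # "not high" ratings: non-commercial vs. commercial publishers
            [28, 19]]    # "high and very high" ratings

chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(f"chi2 = {chi2:.3f}, p = {p:.5f}, dof = {dof}")  # chi2 = 12.438, p < 0.01
```

The same check can be applied to the contingency tables in Tables 6–9.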
Table 6. Association between publisher's commercial orientation and respondents' estimation of the need to change the reviewer selection process.

Need to Change Selection Process of Reviewers | Non-Commercial Publishers | Commercial Publishers | Total
Not high (moderate, low, very low, no need) | 55 (71.4%) | 116 (89.9%) | 171 (83.0%)
High and very high | 22 (28.6%) | 13 (10.1%) | 35 (17.0%)
Total | 77 (100%) | 129 (100%) | 206 (100%)
χ² = 11.693, p < 0.01.
Table 7. Association between publisher's commercial orientation and respondents' estimation of the need to change the type of peer review in use.

Need to Change Type of Peer Review in Use | Non-Commercial Publishers | Commercial Publishers | Total
Not high (moderate, low, very low, no need) | 62 (77.5%) | 121 (92.4%) | 183 (86.7%)
High and very high | 18 (22.5%) | 10 (7.6%) | 28 (13.3%)
Total | 80 (100%) | 131 (100%) | 211 (100%)
χ² = 9.538, p < 0.01.
Table 8. Association between journal's distribution model (closed, open, hybrid) and respondents' estimation of the need to change the quality of peer review reports.

Need to Change Quality of Peer Reviews | Closed Model | Open Model | Hybrid Model | Total
Not high (moderate, low, very low, no need) | 41 (85.4%) | 78 (70.9%) | 42 (89.4%) | 161 (78.5%)
High and very high | 7 (14.6%) | 32 (29.1%) | 5 (10.6%) | 44 (21.5%)
Total | 48 (100%) | 110 (100%) | 47 (100%) | 205 (100%)
χ² = 8.412, p < 0.05.
Table 9. Association between journal's scientific field and respondents' estimation of need to change initial check of manuscripts.

Need to Change Initial Check of Manuscripts | STEM Fields * | Social Sciences and Humanities | Total
Not high (moderate, low, very low, no need) | 90 (73.7%) | 73 (90.1%) | 163 (78.1%)
High and very high | 25 (21.7%) | 8 (9.9%) | 33 (16.8%)
Total | 115 (100%) | 81 (100%) | 196 (100%)
χ² = 4.776, p < 0.05. * STEM fields: natural sciences, technology, medical and health sciences, agricultural sciences, engineering, and mathematics.
Table 10. Respondents' perceptions of needed improvements to enhance the overall quality of the journal, and anticipated obstacles.

Peer review process (n = 61, 45%)
Needed improvements (and/or problems addressed):
- Ensure higher-quality peer review reports (by demanding more professionalism, providing constructive suggestions for corrections, ensuring objectivity)
- Increase reviewers' level of awareness regarding the journal's quality standards
- Speed up the review process, ensuring that reviewers submit reports on time
- Improve processes for identifying new reviewers (introduction/management of e-databases)
- Carry out more rigorous selection of reviewers and review the quality of their work
- Find a way to motivate reviewers (rewards/incentives)
- Provide more information and support to reviewers
Anticipated obstacles:
- Reviewer fatigue/work overload (especially for reviewers who work well and are professional)
- Slowness of reviewers (reviewers do not submit reviews within deadlines or do not deliver at all)
- Too many articles received for review
- Lack of an appropriate and effective way of rewarding reviewers and incentivizing unmotivated reviewers
- No funds to pay reviewers
- Low professionalism of reviewers (bias, unfairness, lack of ethics, etc.)
- Reviewers unaware of the desired quality of reviews and the quality criteria applied by the journal's editor

Journal's visibility and impact (n = 19, 14%)
Needed improvements (and/or problems addressed):
- Increase the visibility of the journal among the target professional public
- Establish/consolidate cooperation with recognizable, established authors
- Introduce a distribution model that will ensure visibility and impact
- Obtain/raise citation indexes (impact factor, etc.)
Anticipated obstacles:
- Lack of indexes
- Authors' perception of the journal's quality (if the journal is seen as low/medium quality, it is not attractive to established authors, whose publications would help to increase impact/visibility)
- Need to reject a large number of submitted articles (lack of time for explanations, resulting in author discouragement)
- Distribution model (readers/users find it difficult to access articles, or articles are not freely accessible)
- Language (if the journal is not published in English)

Setting and enforcing quality standards (n = 16, 12%)
Needed improvements (and/or problems addressed):
- Carry out a more detailed initial check (perform all necessary and in-depth checks)
- Ensure higher overall quality of published articles
- Verify quality criteria more rigorously, raising the threshold (for all criteria)
Anticipated obstacles:
- Constant increase in the number of submitted manuscripts
- Low quality of manuscripts received for review
- Work overload of the editorial team (resulting from too many manuscripts)
- Predominance of quantity over quality, which pushes authors to publish as much as possible
- Pressure (by employers) on young authors to publish as much as possible

Stakeholders' training and networking (n = 10, 7%)
Needed improvements (and/or problems addressed):
- Establish a community of stakeholders (connecting editors, reviewers, and authors) that encourages and facilitates a mutual transfer of knowledge
- Raise awareness among reviewers and authors about the importance of indexing
- Provide more training on how to write and how to conduct valid, high-quality research, thereby attracting authors and increasing the quality of their work (give-and-take principle)
- Provide workshops/training for reviewers (including mentoring, leadership, feedback on the quality of reviews, etc.)
Anticipated obstacles:
- Lack of time (stakeholders are overworked)
- Low motivation of stakeholders
- Lack of money (stakeholders do not receive financial compensation)
- Significant lack of non-financial mechanisms for rewarding stakeholders (especially reviewers, but often also editorial teams)

Note: Total number of responses to the open-ended questions about further relevant changes (q33, n = 135, skipped = 123) and about anticipated obstacles (q35, n = 160, skipped = 98). The table lists the share of respondents (%) who indicated each category. If respondents listed several actions that fell into different categories, their responses were counted in all categories to which they referred. Respondents often mentioned combinations of actions in different areas.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
