In Peer Review One Typically Seeks Input From

  • Research article
  • Open Access
  • Published:

What feedback do reviewers give when reviewing qualitative manuscripts? A focused mapping review and synthesis


Abstract

Background

Peer review is at the heart of the scientific process. With the advent of digitisation, journals started to offer electronic articles or to publish online only. A new philosophy regarding the peer review process found its way into academia: open peer review. Open peer review as practised by BioMed Central (BMC) is a type of peer review where the names of authors and reviewers are disclosed and reviewer comments are published alongside the article. A number of articles have been published that assess peer reviews using quantitative research. However, no studies exist that used qualitative methods to analyse the content of reviewers' comments.

Methods

A focused mapping review and synthesis (FMRS) was undertaken of manuscripts reporting qualitative research submitted to BMC open access journals from 1 January to 31 March 2018. Free-text reviewer comments were extracted from peer review reports using a 77-item classification system organised according to three key dimensions that represented common themes and sub-themes. A two-stage analysis process was employed. First, frequency counts were undertaken to reveal patterns across themes/sub-themes. Second, thematic analysis was conducted on selected themes of the narrative portion of reviewer reports.

Results

A total of 107 manuscripts submitted to nine open-access journals were included in the FMRS. The frequency analysis revealed that among the 30 most frequently employed themes, "writing criteria" (dimension II) is the top-ranking theme, followed by comments in relation to the "methods" (dimension I). In addition, some results suggest an underlying quantitative mindset among reviewers. Results are compared and contrasted with established reporting guidelines for qualitative research to inform reviewers and authors of the feedback most frequently offered to raise the quality of manuscripts.

Conclusions

This FMRS has highlighted some important issues that hold lessons for authors, reviewers and editors. We suggest modifying the current reporting guidelines by including a further item called "Degree of data transformation" to prompt authors and reviewers to make a judgment about the appropriateness of the degree of data transformation in relation to the chosen analysis method. In addition, we suggest that completion of a reporting checklist becomes a requirement on submission.

Peer Review reports

Background

Peer review is at the heart of the scientific process. Reviewers independently examine a submitted manuscript and then recommend acceptance, rejection or – most often – revisions to be made before it gets published [1]. Editors rely on peer review to make decisions on which submissions warrant publication and to enhance quality standards. Typically, each manuscript is reviewed by two or three reviewers [2] who are chosen for their knowledge and expertise regarding the subject or methodology [3]. The history of peer review, often regarded as a "touchstone of modern evaluation of scientific quality" [4], is relatively short. For instance, the British Medical Journal (now the BMJ) was a pioneer when it established a system of external reviewers in 1893. But it was in the second half of the twentieth century that employing peers as reviewers became customary [5]. Then, in 1973 the prestigious scientific weekly Nature introduced a rigorous formal peer review system for every paper it printed [6].

Despite ever-growing concerns about its effectiveness, fairness and reliability [4, 7], peer review as a central part of academic self-regulation is still considered the best available practice [8]. With the advent of digitisation in the late 1990s, scholarly publishing has changed dramatically, with many journals starting to offer print as well as electronic articles or publishing online only [9]. The latter category includes for-profit journals such as BioMed Central (BMC) that have been online since their inception in 1999, with an ever-evolving portfolio of currently over 300 peer-reviewed journals.

Compared with traditional print journals, where individuals or libraries need to pay a fee for an annual subscription or for reading a specific article, open access journals such as BMC, PLoS ONE or BMJ Open are permanently free for everyone to read and download, since the cost of publishing is paid by the author or an entity such as the university. Many, but not all, open access journals impose an article processing charge on the author, also known as the gold open access route, to cover the cost of publication. Depending on the journal and the publisher, article processing charges can range significantly, between US$100 and US$5200 per article [10, 11].

In the digital age, a new philosophy regarding the peer review process found its way into academia, questioning the anonymity of the closed system of peer review as contrary to demands for transparency [1]. The issue of reviewer bias, especially concerning gender and affiliation [12], led not just to the establishment of double-blind review but also to its extreme opposite: the open peer review system [8]. Although the term 'open peer review' has no standardised definition, scholars use the term to indicate that the identities of the authors and reviewers are disclosed and that reviewer reports are openly available [13]. In the late 1990s, the BMJ changed from a closed system of peer review to an open one [14, 15]. Around the same time, other publishers, such as some journals in BMC, followed the example by opening up their peer review.

While peer review reports have long been hidden from the public gaze [16, 17], opening up the closed peer review system allows researchers to access reviewer comments, thus making it possible to study them. Since then, a number of articles have been published that assess reviews using quantitative research methods. For example, Landkroon et al. [18] assessed the quality of 247 reviews of 119 original articles using a 5-point Likert scale. Similarly, Henly and Dougherty [19] developed and applied a grading scale to assess the narrative portion of 464 reviews of 203 manuscripts using descriptive statistics. The retrospective cohort study by van Lent et al. [20] assessed peer review comments on drug trials from 246 manuscripts to investigate whether there is a relationship between the content of these comments and sponsorship, using a generalised linear mixed model. Most recently, Davis et al. [21] evaluated reviewer grading forms for surgical journals with higher impact factors and compared them with surgical journals with lower impact factors using Fisher's exact test.

Despite the readily available reviewer comments that are published alongside the final article in many open access journals, to the best of our knowledge no studies exist to date that used – besides quantitative methods – also qualitative methods to analyse the content of reviewers' comments. Identifying (negative) reviewer comments will help authors to pay particular attention to these aspects and help prospective qualitative researchers to understand the most common pitfalls when preparing their manuscript for submission. Thus, the aim of the study was to appraise the quality and nature of reviewers' feedback in order to understand how reviewers engage with and influence the development of a qualitative manuscript. Our focus on qualitative research can be explained by the fact that we are passionate qualitative researchers with a history of determining the state of qualitative research in the health and social science literature [22]. The following research questions were answered: (1) What are the frequencies of certain commentary types in manuscripts reporting on qualitative research? and (2) What is the nature of reviewers' comments made on manuscripts reporting on qualitative research?

Methods

We conducted a focused mapping review and synthesis (FMRS) [22,23,24,25]. Most forms of review aim for breadth and exhaustive searches, but the FMRS searches within specific, pre-determined journals. While Platt [26] observed that 'a number of studies have used samples of journal articles', the distinctive characteristic of the FMRS is the purposive selection of journals. These are chosen for their likelihood of containing articles relevant to the field of inquiry – in this case qualitative research published in open access journals that operate an open peer review process that involves posting the reviewers' reports. It is these reports that we have analysed using thematic analysis techniques [27].

Currently there are over 70 BMC journals that have adopted open peer review. The FMRS focused on reviewers' reports published during the first quarter of 2018. Journals were selected using a three-stage process. First, we produced a list of all BMC journals that operate an open peer review process and publish qualitative research articles (n = 62). Second, from this list we selected journals that cover general fields of practice and are not disease specific (n = 15). Third, to ensure a sufficient number of qualitative articles, we excluded journals with fewer than 25 hits on the search term "qualitative" for the year 2018 (search date: 16 July 2018), because the chances of finding sufficient articles of interest were considered too slim. At the end of the selection process, the following nine BMC journals were included in our synthesis: (1) BMC Complementary and Alternative Medicine, (2) BMC Family Practice, (3) BMC Health Services Research, (4) BMC Medical Education, (5) BMC Medical Ethics, (6) BMC Nursing, (7) BMC Public Health, (8) Health Research Policy and Systems, and (9) Implementation Science. Since these journals represent different subjects, a variety of qualitative papers written for different audiences was captured. Every article published within the timeframe was scrutinised against the inclusion and exclusion criteria (Table 1).

Table 1 Inclusion and exclusion criteria


Development of the data extraction sheet

A validated instrument for the classification of reviewer comments does not exist [20]. Hence, a detailed classification system was developed and pilot tested, taking previous research into account [20]. Our newly developed data extraction sheet consists of a 77-item classification system organised according to three dimensions: (1) scientific/technical content, (2) writing criteria/representation, and (3) technical criteria. It represents themes and sub-themes identified by reading reviewer comments on twelve articles published in open peer review journals. For the development of the data extraction sheet, we randomly selected four articles containing qualitative research from each of the following three journals published between 2017 and 2018: BMC Nursing, BMC Family Practice and BMJ Open. We then analysed the reviews of manuscripts by systematically coding and categorising the reviewers' free-text comments. Following the recommendation by Shashok [28], we initially organised the reviewers' comments along two primary dimensions, i.e., scientific content and writing criteria. Shashok [28] argues that when peer reviewers confuse content and writing, their feedback can be misunderstood by authors, who may alter texts in unintended ways to the detriment of the manuscript.

To check the comprehensiveness of our classification system, provisional themes and sub-themes were piloted using reviewer comments we had previously received on twelve of our own manuscripts that had been submitted to journals operating blind peer review. We wanted to account for potential differences in reviewers' feedback (open vs. blind review). As a result of this quality enhancement process, three sub-themes and a further dimension ('technical criteria') were added. For reasons of clarity and comprehensibility, the dimension 'scientific content' was subdivided following the IMRaD structure. IMRaD is the most common organisational structure of an original research article, comprising Introduction, Methods, Results and Discussion [29]. Anchoring examples were provided for each theme/sub-theme. To account for reviewer comments unrelated to the IMRaD structure, a sub-category called 'generic codes' was created to collect more general comments. When reviewer comments could not be assigned to any of the existing themes/sub-themes, they were noted as "Miscellaneous". Table 2 shows the final data extraction sheet including anchoring examples.

Table 2 Data extraction sheet used to extract free-text comments from the reviewers' reports


Data extraction procedure

Data extraction was accomplished by six doctoral students (coders). On average, each coder was allocated 18 articles. After reading the reviews, coders independently classified each comment using the classification system. In line with Day et al. [30], a reviewer comment was defined as "a distinct statement or idea found in a review, regardless of whether that statement was presented in isolation or was included in a paragraph that contained several statements." Editor comments were not included. Reviewers' comments were copied and pasted into the most appropriate item of the classification system following a set of pre-defined guidelines. For example, a reviewer comment could only be coded once, by assigning it to the most appropriate theme/sub-theme. A separate data extraction sheet was used for each article. For the purpose of calibration, the first completed data extraction sheet from each coder, together with the reviewer's comments, was sent to the study coordinator (ORH), who provided feedback on classifying the reviewer comments. The aim of the calibration was to ensure that all coders were working within the same parameters of understanding, to discuss the subtleties of the judgement process and to create consensus regarding classifications. Although the assignment to specific themes/sub-themes is, by nature, a subjective process, difficult-to-assign comments were classified following discussion and agreement between coder and study coordinator to ensure reliability. Once all data extraction was completed, two experienced qualitative researchers (CB-J, JT) independently undertook a further calibration exercise on a random sub-sample of 20% of articles (n = 22) to ensure consistency across coders. Articles were selected using a random number generator. For these 22 articles, classification discrepancies were resolved by consensus between coders and experienced researchers. Finally, all individual data extraction sheets were collated to create a comprehensive Excel spreadsheet with over 8000 cells that allowed tallying of the reviewers' comments across manuscripts for the purpose of data analysis. For each manuscript, a reviewer could have several remarks related to one type of comment. However, each type of comment was scored only once per category.
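To make the calibration step concrete, the following is a minimal illustrative sketch – not the authors' actual code or materials – of how a roughly 20% random sub-sample could be drawn with a random number generator and how simple percent agreement between a coder and a checking researcher could be tallied. The file name and column names (calibration_codes.xlsx, coder_theme, checker_theme) are assumptions introduced for the example only.

```python
# Illustrative sketch only; file and column names are hypothetical placeholders.
import random

import pandas as pd

random.seed(2018)                                    # fixed seed so the draw is reproducible
article_ids = list(range(1, 108))                    # the 107 included articles
calibration_sample = random.sample(article_ids, 22)  # ~20% random sub-sample

def percent_agreement(coder: pd.Series, checker: pd.Series) -> float:
    """Share of comments assigned to the same theme/sub-theme by both raters."""
    return float((coder == checker).mean() * 100)

# One row per reviewer comment, with the theme assigned by the original coder
# and by the experienced researcher who re-checked it.
codes = pd.read_excel("calibration_codes.xlsx")
agreement = percent_agreement(codes["coder_theme"], codes["checker_theme"])
print(f"Inter-rater agreement: {agreement:.0f}%")
```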

Finally, reviewer comments were 'quantitized' [31] by applying a programming language (Python) within Jupyter Notebook, an open-source web application, to perform frequency counts of the free-text comments against the 77 items. Among other data manipulations, we sorted elements of arrays in descending order of frequency using Pandas, counted the number of studies in which a certain theme/sub-theme occurred, conducted distinct word searches using NLTK 3 and grouped data according to certain criteria. The calculation of frequencies is a way to unite the empirical precision of quantitative research with the descriptive precision of qualitative research [32]. This quantitative transformation of qualitative data allowed us to extract more meaning from our spreadsheet by revealing patterns across themes/sub-themes, thus indicating which of them to analyse using thematic analysis.
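As an illustration of this 'quantitizing' step, the sketch below shows the kind of Pandas and NLTK operations described above; it is not the authors' actual notebook. The spreadsheet name and column names (extraction_sheets.xlsx, article_id, theme, comment_text) are assumptions made for the example.

```python
# Minimal sketch of the 'quantitizing' step, assuming a hypothetical collated
# spreadsheet with one row per classified reviewer comment.
import nltk
import pandas as pd
from nltk.tokenize import word_tokenize

df = pd.read_excel("extraction_sheets.xlsx")      # collated data extraction sheets

# Number of articles in which each theme/sub-theme occurs, sorted in
# descending order of frequency (cf. the 30 most frequent themes in Table 4).
articles_per_theme = (
    df.groupby("theme")["article_id"]
      .nunique()
      .sort_values(ascending=False)
)
print(articles_per_theme.head(30))

# Distinct word search across the free-text comments, e.g. for "representative".
nltk.download("punkt", quiet=True)                # tokeniser model used by word_tokenize
tokens = [w.lower() for text in df["comment_text"] for w in word_tokenize(str(text))]
word_counts = nltk.FreqDist(tokens)
print(word_counts["representative"], "occurrences of 'representative'")
```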

Results

A total of 109 manuscripts submitted to nine open-access journals were included in the FMRS. When scrutinising the peer review reports, we noticed that on one occasion the reviewer's comments were missing [33]. For the remaining 108 manuscripts, reviewer comments were accessible via the journal's pre-publication history. On close inspection, however, it became apparent that one article did not contain qualitative research, thus leaving ultimately 107 articles to work with (supplementary file). Given that each manuscript could potentially be reviewed by multiple reviewers and underwent at least one round of revision, the total number of reviewer reports analysed amounted to 347, containing collectively 1703 reviewer comments. The level of inter-rater agreement for the 22 articles included in the calibration exercise was 97%. Disagreement related, for example, to coding a comment as "miscellaneous" or as "confirmation/approval (from reviewer)". For 18 out of 22 articles, there was 100% agreement for all types of comments.

Variation in number of reviewers

The number of reviewers invited by the editor to review a submitted manuscript varied greatly within and among journals. While the majority of manuscripts across journals had been reviewed by two to three reviewers, there were also significant variations. For example, the manuscript submitted to BMC Medical Education by Burgess et al. [34] had been reviewed by five reviewers, whereas the manuscript submitted to BMC Public Health by Lee and Lee [35] had been reviewed by one reviewer only. Even within journals there was huge variation. Among our sample, BMC Public Health had the greatest variance, ranging from one to four reviewers. In addition, it was noted that additional reviewers were sometimes not called in until the second or even third revision of the manuscript. A summary of key information on the journals included in the FMRS is provided in Table 3.

Table 3 Summary of key information on open access journals included in the FMRS


"Quantitizing" reviewer comments

The frequency analysis revealed that the number of articles in which a certain theme/sub-theme occurred ranged from one to 79. Across all 107 articles, the types of comments most often reported related to generic themes. Reviewer comments regarding "Adding data/detail/nuances", "Clarification needed", "Further explanation required" and "Confirmation/approval (from reviewer)" were used in 79, 79, 66 and 63 articles, respectively. The four most frequently used themes/sub-themes are composed of generic codes from dimension I ("Scientific/technical content"). Leaving all generic codes aside, it became apparent that among the 30 most frequently employed themes, "Writing criteria" (dimension II) is the top-ranking theme, followed by comments in relation to the "Methods" (dimension I) (Table 4).

Table 4 The 30 most frequently used themes reviewers provided feedback on (in descending order)


In the following, we present key qualitative findings regarding "Confirmation/approval from reviewers" (generic), "Sampling" and "Analysis process" (methods), "Robust/rich data analysis" and "Themes/sub-themes" (results), as well as findings that suggest an underlying quantitative mindset among the reviewers.

Confirmation/approval from reviewers (generic)

The theme "confirmation/approval from reviewers" ranks third amidst the top 30 categories. A full of 63 manuscripts contained at least one reviewer annotate related to this theme. Overall, reviewers maintained a respectful and affirmative rhetoric when providing feedback. The vast majority of reviewers began their report past stating that the manuscript was well written. The following is a typical example:

"Overall, the newspaper is well written, and theoretically informed." Article #14.

Reviewers then continued to add explicit praise for aspects or sections that were particularly innovative and/or well constructed before they started to put forward any negative feedback.

Sampling (methods)

Across all 107 articles there were 34 reviewer comments in relation to the sampling technique(s). Two major categories were identified: (1) composition of the sample and (2) identification and justification of selected participants. Regarding the former, reviewers raised several concerns about how the sample was composed. For instance, one reviewer wanted to know the reason for the female predominance in the study and why an entire focus group was composed of females only. Another reviewer expressed strong criticism of the composition of the sample, since only young, educated and non-minority white British participants were included in the study. The reviewer commented:

"So a typical patient was immature, educated and non-minority White British? The enquiry studies these days should exist inclusive of various types of patients and excluding patients because of their age and ethnicity is extremely concerning to me. This assumption that these individuals will "observe information technology more difficult to consummate questionnaires" is apropos" Commodity #40.

This raised concerns about potentially excluding important diverse perspectives – such as extreme or deviant cases – from other participants. Similarly, some reviewers expressed concerns that relevant groups of people were not interviewed, calling into question whether the findings were theoretically saturated. In terms of the identification of participants, reviewers raised questions regarding how the authors obtained the necessary characteristics to achieve purposive sampling or why only certain groups of people were included for interviews. In addition, reviewers criticised that some authors did not mention their inclusion/exclusion criteria for selecting participants or did not specify their sampling method. For example:

"The authors state that they recruited a purposive sample of patients for the interviews. Concerning which variables was this sampling purposive? Are there any studies informing the patient selection process?" Article #61.

Hence, reviewers requested more detailed information on how participants were selected and asked authors to clearly state the type of sampling. Apart from the two key categories, reviewers made additional comments in relation to data saturation, transferability of findings and limitations of certain sampling methods, and criticised the lack of description of participants who were approached but refused to participate in the study.

Details of analysis process (methods)

In 60 out of 107 articles, reviewers made comments in relation to the data analysis. The vast majority of comments stressed that authors provided scarce information about the analysis process. Hence, reviewers requested a more detailed description of the specific analysis techniques employed so that readers can obtain a better understanding of how the analysis was done and gauge the trustworthiness of the findings. To this end, reviewers frequently requested an explicit statement on whether the analysis was inductive or deductive, iterative or sequential. One reviewer wrote the following comment:

"Please elaborate more than on the qualitative assay. The authors indicate that they used 'iterative' approaches. While this is certainly commendable, it is important to know how they moved from codes to themes (eastward.thousand. inductively? deductively?)" Article #5.

Since there are many approaches to analysing qualitative data, reviewers demanded sufficient detail in relation to the underlying theoretical framework used to develop the coding scheme, the analytic process, the researchers' background (e.g. profession), the number of coders, data handling, the length of interviews and whether data saturation occurred. Over a dozen reviewer comments related specifically to the identification of themes/sub-themes. Reviewers requested a more detailed description of how the themes/sub-themes were derived from codes and whether they were developed by a second researcher working independently.

"I would have liked to read how their themes were generated, what they were and how they assured robust practices in qualitative data analysis". Commodity #43.

In addition, some reviewers were of the opinion that the approach to analysis had led to a surface-level penetration of the data, which was reflected in the Results section where themes were underexplored (for more detail see "Robust/rich data analysis" below). Finally, reviewer comments that occurred infrequently included questions concerning inter-rater reliability, competing interpretations of data, the use of computer software or the original interview language.

Robust/rich data analysis (results)

Among the 30 reviewer comments related to this theme/sub-theme, three key facets were observed: (1) greater analytical depth required, (2) suggestions for further analysis, and (3) themes are underexplored. In relation to the first point, reviewers requested more in-depth data analysis to strengthen the quality of the manuscript. Reviewers were of the opinion that authors reproduced interview data (raw data) in a reduced form with minimal or no interpretation, thus leaving the interpretation to the reader. Other reviewers referred to manuscripts as preliminary drafts that need to be further analysed to attain greater analytical depth of themes, make links between themes or identify variations between respondents. In relation to the second point, several reviewers offered suggestions for further analysis. They provided detailed information on how to further explore the data and what additional results they would like to see in the revised version (e.g. group comparison, gender analysis). The latter aspect goes hand in hand with the third point. Several reviewers pointed out that the findings were shallow, simplistic or superficial at best, lacking detailed descriptions of complex accounts from participants. For example:

"The results of the study are generally descriptive and there is limited analysis. There is also absence of thick description, which one would expect in a qualitative written report". Commodity #34.

Even after the first revision, some manuscripts still lacked detailed analysis, as the following comment from the same reviewer illustrates:

"I believe that the results in the revised version are notwithstanding by and large descriptive and that there is limited assay". Article #34, R1.

Other, less frequently mentioned reviewer comments included lack of deviant cases or absence of relationships between themes.

Themes/sub-themes (results)

In total, there were 24 reviewer comments in relation to themes/sub-themes. More than half of the comments fell into one of three categories: (1) themes/sub-themes are not sufficiently supported by data, (2) the example/excerpt does not fit the stated theme, and (3) insufficient quotes are used to support the theme/sub-theme. In relation to the first category, reviewers largely criticised that the data provided were insufficient to warrant being called a theme. Reviewers requested data "from more than just one participant" to substantiate a certain theme or criticised that only a short excerpt was provided to support a theme. The second category dealt with reviewer comments that questioned whether the excerpts provided really reflected the essence of a theme/sub-theme presented in the results section. The following reviewer comment exemplifies the issue:

"The data themes seem valid, but the data and narratives used to illustrate that don't seem to fit entirely under each sub-heading". Article #99.

Some reviewers provided alternative suggestions on how to name a theme/sub-theme or advised the authors to consider whether excerpts might be better placed under a different theme. The third category concerns themes/sub-themes that are not sufficiently supported by participants' quotes. Reviewers perceived direct quotes as evidence to support a certain theme or as a means to add strength to the theme, as the following example illustrates:

"Please provide at to the lowest degree one quote from each school leader and one quote from children to support this theme, if possible. It would seem that almost, if not all, themes should reflect data from each participant grouping". Article #88.

Hence, the absence of quotes prompted reviewers to request at least one quote to justify the existence of that theme. The inclusion of a rich set of quotes was perceived as a strength of a manuscript. Finally, less frequently raised reviewer comments related to the discrimination of similar themes, the presentation of quotes in tables (rather than under the appropriate theme headings), the lack of a definition of a theme and reducing the number of themes.

Quantitative mindset

Some reviewers who were appointed by journal editors to review a manuscript containing qualitative research evaluated the quality of the manuscript from the perspective of a quantitative research paradigm. Some reviewers not only used terminology that is attuned to quantitative research, but their judgements were also based on a quantitative mindset. In particular, there were a number of reviewer comments published in BMC Health Services Research, BMC Medical Education and BMC Family Practice that demonstrated an apparent lack of understanding of the principles underlying qualitative research on the part of the person providing the review. First, several reviewers seemed to have confused the concept of generalisability with the concept of representativeness inherently associated with the positivist tradition. For instance, reviewers erroneously raised concerns about whether interviewees were "representative" of the "final target population" and requested the provision of detailed demographic characteristics.

"Need to improve describe how the patients are representative of patients with chronic center failure in kingdom of the netherlands generally. The annunciation that "a representative group of patients were recruited" would benefit from stating what they were representative of." Commodity # 66.

Similarly, another reviewer wanted to know from the authors how they ensured that the qualitative analysis was done objectively.

"The reader would benefit from a detailed description of […] how did the investigators ensure that they were objective in their analysis – objectivity and trustworthiness?" Commodity #22.

Furthermore, despite the fact that the paradigm wars have largely come to an end, hostility has not ceased on all fronts. In some reviewers, the belief in the authority and superiority of the quantitative paradigm over the qualitative paradigm is still present, as the following comment illustrates:

"The main question and methods of this article is largely qualitative and does not seem to have significant implications for clinical do, thus it may not exist suitable to publish in this periodical." Article #45.

Finally, one reviewer apologised at the outset of the reviewer's report for being unable to judge the data analysis due to the absence of sufficient knowledge in qualitative research.

Discussion

Overall, in this FMRS we found that reviewers maintained a respectful and affirmative rhetoric when providing feedback. Nonetheless, the positive feedback did not overshadow any key negative points that needed to be addressed in order to increase the quality of the manuscript. However, it should not be taken for granted that all reviewers are as courteous and generous as the ones included in our particular review, because, as Taylor and Bradbury-Jones [36] observed, there are many examples where reviewers can be unhelpful and destructive in their comments.

A central finding of this FMRS is that reviewers are more inclined to comment on the writing than on the methodological rigour of a manuscript. This is a matter of concern, because, as Altman [37] – the originator of the EQUATOR (Enhancing the Quality and Transparency of Health Research) Network – has pointed out: "Unless methodology is described the conclusions must be suspect". If we are to advance the quality of qualitative research then we need to encourage clarity and depth in reporting the rigour of research.

When reviewers did comment on the methodological aspects of an article, issues often commented on related to sampling, data analysis, robust/rich data analysis as reflected in the findings, and themes/sub-themes that are insufficiently supported. Considerable work has been undertaken over the past decade trying to improve the reporting standards of qualitative research through the dissemination of qualitatively oriented reporting guidelines such as the 'Standards for Reporting Qualitative Research' (SRQR) [38] or the 'Consolidated Criteria for Reporting Qualitative Research' (COREQ) [39], with the aim of improving the transparency of qualitative research. Although these guidelines appear to be comprehensive, some important issues identified in our study are not mentioned or are dealt with only somewhat superficially: sampling, for example. Neither COREQ nor SRQR sheds light on the appropriateness of the sample composition, i.e., critically questioning whether all relevant groups of people have been identified as potential participants or whether extreme or deviant cases were sought.

Similarly, the lack of in-depth data analysis has been identified as another weakness, where uninterpreted (raw) data were presented as if they were findings. However, existing reporting guidelines are not sharp enough to distinguish between findings and data. While findings are researchers' interpretations of the data they collected, data consist of the empirical, uninterpreted material researchers offer as their findings [32]. Hence, we suggest modifying the current reporting guidelines by adding a further item to the checklist called "Degree of data transformation". The suggested checklist item might prompt both authors and reviewers to make a judgment about the degree to which data have been transformed, i.e., interpretively removed from the data as given. The rationale for the new item is to raise authors' and reviewers' awareness of the appropriateness of the degree of data transformation in relation to the chosen analysis method. For example, findings derived from content analysis remain close to the data as given; they are often organised into surface classification systems and summarised in brief text. Findings derived from grounded theory, however, should offer a coherent model or line of argument which addresses causality or the fundamental nature of events or experiences [32].

In addition, some reviewers put forward comments that we refer to as aligning with a 'quantitative mindset'. Such reviewers did not appear to understand that, rather than aspiring to statistical representativeness, in qualitative research participants are selected purposefully for the contribution they can make towards understanding the phenomenon under study [40]. Hence, the generalisability of qualitative findings beyond an immediate group of participants is judged by similarities between the time, place, people or other social contexts [41] rather than in relation to the comparability of demographic variables. It is the fit of the topic or the comparability of the problem that is of concern [40].

The majority of issues that reviewers picked up on are already mentioned in reporting guidelines, so there is no reason why these should be omitted by researchers. Many journals now insist on alignment with COREQ criteria, so there is an important question to be asked as to why this is not always happening. We suggest that completion of an established reporting checklist (e.g. COREQ, SRQR) becomes a requirement on submission.

In this FMRS we have made judgements about fellow peer reviewers and found their feedback to be constructive, but also, among some, we found a lack of grasp of the essence of the qualitative endeavour. Some reviewers did not seem to understand that objectivity and representative sampling are the antithesis of subjectivity, reflexivity and data saturation. We acknowledge, though, that individual reviewers might have varying levels of experience and competence, both in terms of qualitative research and in the reviewing process. We found one reviewer who apologised at the start of the reviewer's report for being unable to judge the data analysis due to their lack of sufficient knowledge in qualitative research. In line with Spigt and Arts [42], we appreciate the honesty of that reviewer in being transparent about their skillset. The lessons here, we feel, are for more experienced reviewers to offer support and reviewing mentorship to those who are less experienced, and for reviewers to emulate the honesty of the reviewer discussed here by being open about their capabilities within the review process.

Based on our findings, we have a number of recommendations for both researchers and reviewers. For researchers reporting qualitative studies, we propose that particular attention is paid to the reporting of sampling techniques, both the characteristics and composition of the sample and how participants were selected. This is an issue that the reviewers in our FMRS picked up on, so forewarned is forearmed. But it is also crucially important that sampling matters are not glossed over, so this constitutes good practice in research reporting as well. Second, it seems that qualitative researchers do not give sufficient detail about analytic techniques and underlying theoretical frameworks. The latter has been pointed out before [25], but both these aspects were often the subject of reviewer comments.

Our recommendation for reviewers is simply to be honest. If qualitative research is not an area of expertise, then it is better to decline to undertake the review than to apply a quantitative lens in the assessment of a qualitative piece of work. It is inappropriate to ask for details about validity and generalisability, and it shows a lack of respect to qualitative researchers. We are well beyond the arguments about quantitative versus qualitative [43]. It is entirely appropriate to comment on background and findings and any obvious deficiencies. Finally, our recommendation to editors is a difficult one, because as editors ourselves we know how challenging it can be to find willing reviewers. When selecting reviewers, however, it is equally important to bear in mind the methodological aspects of an article as well as its subject, and to select reviewers with appropriate methodological expertise. Some journals make it a requirement for quantitative articles to be reviewed by a statistical expert, and we think this is good practice. When it comes to qualitative articles, however, the methodological expertise of reviewers may not be so stringently noted and applied. Editors could make a difference here and help to push up the quality of qualitative reviews.

Strengths and weaknesses

Since we only had access to reviewers' comments on articles that were ultimately published in open access journals, we are unable to compare them with the types of comments related to rejected submissions. Thus, this study was limited to manuscripts that were sent out for external peer review and were finally published. Furthermore, the chosen study design of analysing only reviewer comments on published articles with an open system of peer review did not allow direct comparison with reviewer comments derived from blind review.

An FMRS provides a snapshot of a particular issue at one particular time [23]. To that end, findings might differ in another review undertaken in a different time period. However, as a contemporary profile of reviewing within qualitative research, the current findings provide useful insights for authors of qualitative reports and reviewers alike. Further research should focus on comparing reviewer comments taken from open and closed systems of peer review in order to identify similarities and differences between the two models of peer review.

A limitation is that we reviewed open access journals because this was the only way of accessing a range of comments. The alternative that we did consider was to use the feedback provided by reviewers on our own manuscripts. However, this would have lacked the transparency and traceability associated with the current FMRS, which we consider to be a strength. That said, there may be an inherent problem in having reviewed open access peer review comments, where both the author and reviewer are known. Reviewers are unable to 'hide behind' the anonymity of blind peer review, and this might explain, at least in part, why their comments as analysed for this review were overwhelmingly courteous and constructive. This is at odds with the comments that one of us has received as part of a blind peer review: 'silly, silly, silly' [36].

Conclusions

This FMRS has highlighted some important issues in the field of qualitative reviewing that hold lessons for authors, reviewers and editors. Authors of qualitative reports are called upon to follow guidelines on reporting and whatever amendments these might come to contain as recommended by the findings of our review. Humility and transparency are required among reviewers when it comes to accepting to undertake a review and making an honest appraisal of their capabilities in understanding the qualitative endeavour. Journal editors can assist this through thoughtful and judicious selection of reviewers. Ultimately, all those involved in the publication process can drive up the quality of individual qualitative articles, and the synergy is such that this can make a significant impact on quality across the field.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

BMC:

BioMed Central

BMJ:

British Medical Journal

COREQ:

Consolidated criteria for reporting qualitative research

EQUATOR:

Enhancing the quality and transparency of health research

FMRS:

Focused mapping review and synthesis

IMRaD:

Introduction, methods, results and discussion

NLTK:

Natural Language Toolkit

SRQR:

Standards for reporting qualitative research

References

  1. Gannon F. The essential role of peer review (editorial). EMBO Rep. 2001;21(91):743.

  2. Mungra P, Webber P. Peer review process in medical research publications: language and content comments. Engl Specif Purp. 2010;29:43–53.

  3. Turcotte C, Drolet P, Girard M. Study design, originality, and overall consistency influence acceptance or rejection of manuscripts submitted to the journal. Can J Anaesth. 2004;51:549–56.

  4. Van der Wall EE. Peer review under review: room for improvement? Neth Heart J. 2009;17:187.

  5. Burnham JC. The evolution of editorial peer review. JAMA. 1990;263:1323–9.

  6. Baldwin M. Credibility, peer review, and Nature, 1945-1990. Notes Rec R Soc Lond. 2015;69:337–52.

  7. Lee CJ, Sugimoto CR, Zhang G, Cronin B. Bias in peer review. J Assoc Inf Sci Technol. 2013;64:2–17.

  8. Horbach SPJM, Halffman W. The changing forms and expectations of peer review. Res Integr Peer Rev. 2018;3:8.

  9. Oermann MH, Nicoll LH, Chinn PL, Ashton KS, Conklin JL, Edie AH, et al. Quality of articles published in predatory nursing journals. Nurs Outlook. 2018;66:4–10.

  10. University of Cambridge. How much do publishers charge for Open Access? (2019) https://www.openaccess.cam.ac.uk/paying-open-access/how-much-do-publishers-charge-open-access Accessed 26 Jun 2019.

  11. Elsevier. Open access journals. (2018) https://www.elsevier.com/about/open-science/open-access/open-access-journals Accessed 28 Oct 2018.

  12. Peters DP, Ceci SJ. Peer-review practices of psychological journals: the fate of published articles, submitted again. Behav Brain Sci. 1982;5:187–95.

  13. Ross-Hellauer T. What is open peer review? A systematic review. F1000 Res. 2017;6:588.

  14. Smith R. Opening up BMJ peer review. A beginning that should lead to complete transparency. BMJ. 1999;318:4–5.

  15. Brown HM. Peer review should not be anonymous. BMJ. 2003;326:824.

  16. Gosden H. "Thank you for your critical comments and helpful suggestions": compliance and conflict in authors' replies to referees' comments in peer reviews of scientific research papers. Iberica. 2001;3:3–17.

  17. Swales J. Occluded genres in the academy. In: Mauranen A, Ventola E, editors. Academic writing: intercultural and textual issues. Amsterdam: John Benjamins Publishing Company; 1996. p. 45–58.

  18. Landkroon AP, Euser AM, Veeken H, Hart W, Overbeke AJ. Quality assessment of reviewers' reports using a simple instrument. Obstet Gynecol. 2006;108:979–85.

  19. Henly SJ, Dougherty MC. Quality of manuscript reviews in nursing research. Nurs Outlook. 2009;57:18–26.

  20. Van Lent M, IntHout J, Out HJ. Peer review comments on drug trials submitted to medical journals differ depending on sponsorship, results and acceptance: a retrospective cohort study. BMJ Open. 2015. https://doi.org/10.1136/bmjopen-2015-007961.

  21. Davis CH, Bass BL, Behrns KE, Lillemoe KD, Garden OJ, Roh MS, et al. Reviewing the review: a qualitative assessment of the peer review process in surgical journals. Res Integr Peer Rev. 2018;3:4.

  22. Bradbury-Jones C, Breckenridge J, Clark MT, Herber OR, Wagstaff C, Taylor J. The state of qualitative research in health and social science literature: a focused mapping review and synthesis. Int J Soc Res Methodol. 2017;20:627–45.

  23. Bradbury-Jones C, Breckenridge J, Clark MT, Herber OR, Jones C, Taylor J. Advancing the science of literature reviewing in social research: the focused mapping review and synthesis. Int J Soc Res Methodol. 2019. https://doi.org/10.1080/13645579.2019.1576328.

  24. Taylor J, Bradbury-Jones C, Breckenridge J, Jones C, Herber OR. Risk of vicarious trauma in nursing research: a focused mapping review and synthesis. J Clin Nurs. 2016;25:2768–77.

  25. Bradbury-Jones C, Taylor J, Herber OR. How theory is used and articulated in qualitative research: development of a new typology. Soc Sci Med. 2014;120:135–41.

  26. Platt J. Using journal articles to measure the level of quantification in national sociologies. Int J Soc Res Methodol. 2016;19:31–49.

  27. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3:77–101.

  28. Shashok K. Content and communication: how can peer review provide helpful feedback about the writing? BMC Med Res Methodol. 2008;8:3.

  29. Hall GM. How to write a paper. 2nd ed. London: BMJ Publishing Group; 1998.

  30. Day FC, Schriger DL, Todd C, Wears RL. The use of dedicated methodology and statistical reviewers for peer review: a content analysis of comments to authors made by methodology and regular reviewers. Ann Emerg Med. 2002;40:329–33.

  31. Tashakkori A, Teddlie C. Mixed methodology: combining qualitative and quantitative approaches. London: Sage Publications; 1998.

  32. Sandelowski M, Barroso J. Handbook for synthesizing qualitative research. New York: Springer Publishing Company; 2007.

  33. Jonas K, Crutzen R, Krumeich A, Roman N, van den Borne B, Reddy P. Healthcare workers' beliefs, motivations and behaviours affecting acceptable provision of sexual and reproductive healthcare services to adolescents in Cape Town, South Africa: a qualitative study. BMC Health Serv Res. 2018;18:109.

  34. Burgess A, Roberts C, Sureshkumar P, Mossman K. Multiple mini interview (MMI) for general practice training selection in Australia: interviewers' motivation. BMC Med Educ. 2018;18:21.

  35. Lee S-Y, Lee EE. Cancer screening in Koreans: a focus group approach. BMC Public Health. 2018;18:254.

  36. Taylor J, Bradbury-Jones C. Writing a helpful journal review: application of the 6 C's. J Clin Nurs. 2014;23:2695–7.

  37. Altman D. My journey to EQUATOR: There are no degrees of randomness. EQUATOR Network. 2016. https://www.equator-network.org/2016/02/16/anniversary-blog-series-1/ Accessed 17 Jun 2019.

  38. O'Brien BC, Harris IB, Beckman TJ, Reed DA, Cook DA. Standards for reporting qualitative research: a synthesis of recommendations. Acad Med. 2014;89:1245–51.

  39. Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19:349–57.

  40. Morse JM. Editorial: Qualitative generalizability. Qual Health Res. 1999;9:5–6.

  41. Leung L. Validity, reliability, and generalizability in qualitative research. J Family Med Prim Care. 2015;4:324–7.

  42. Spigt M, Arts ICW. How to review a manuscript. J Clin Epidemiol. 2010;63:1385–90.

  43. Griffiths P, Norman I. Qualitative or quantitative? Developing and evaluating complex interventions: time to end the paradigm war. Int J Nurs Stud. 2013;50:583–4.


Acknowledgments

The support of Daniel Rütter in compiling data and providing technical support is gratefully acknowledged. Furthermore, we would like to thank Holger Hönings for applying a general-purpose programming language to allow for the quantification of reviewer comments in the MS Excel spreadsheet.

Author information

Affiliations

Contributions

All authors have made an intellectual contribution to this research paper. ORH conducted the qualitative analysis and wrote the first draft of the paper. SB, SC, JH, YK, RN and JDV extracted and classified each comment using the classification system. CB-J and JT independently undertook a calibration exercise on a random sub-sample of articles (n = 22) to ensure consistency across coders. All co-authors (CB-J, SB, SC, JH, YK, RN, JDV and JT) had input into drafts and have read and approved the final version of the manuscript.

Corresponding author

Correspondence to Oliver Rudolf HERBER.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary data

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

HERBER, O.R., BRADBURY-JONES, C., BÖLING, S. et al. What feedback do reviewers give when reviewing qualitative manuscripts? A focused mapping review and synthesis. BMC Med Res Methodol 20, 122 (2020). https://doi.org/10.1186/s12874-020-01005-y


  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1186/s12874-020-01005-y

Keywords

  • Open access publishing
  • Journals
  • Peer review
  • Manuscript review
  • Reviewer's report
  • Qualitative analysis
  • Qualitative research
  • Synthesis
  • Mapping


Source: https://bmcmedresmethodol.biomedcentral.com/articles/10.1186/s12874-020-01005-y
