EJIFCC, v.25(3); 2014 Oct

Peer Review in Scientific Publications: Benefits, Critiques, & A Survival Guide

Jacalyn Kelly

1 Clinical Biochemistry, Department of Pediatric Laboratory Medicine, The Hospital for Sick Children, University of Toronto, Toronto, Ontario, Canada

Tara Sadeghieh

Khosrow Adeli

2 Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Canada

3 Chair, Communications and Publications Division (CPD), International Federation of Clinical Chemistry and Laboratory Medicine (IFCC), Milan, Italy

The authors declare no conflicts of interest regarding publication of this article.

Peer review has been defined as a process of subjecting an author’s scholarly work, research or ideas to the scrutiny of others who are experts in the same field. It functions to encourage authors to meet the accepted high standards of their discipline and to control the dissemination of research data to ensure that unwarranted claims, unacceptable interpretations or personal views are not published without prior expert review. Despite its widespread use by most journals, the peer review process has also been widely criticized for the slowness with which new findings are published and for perceived bias by the editors and/or reviewers. Within the scientific community, peer review has become an essential component of the academic writing process. It helps ensure that papers published in scientific journals answer meaningful research questions and draw accurate conclusions based on professionally executed experimentation. Submission of low-quality manuscripts has become increasingly prevalent, and peer review acts as a filter to prevent this work from reaching the scientific community. The major advantage of a peer review process is that peer-reviewed articles provide a trusted form of scientific communication. Since scientific knowledge is cumulative and builds on itself, this trust is particularly important. Despite the positive impacts of peer review, critics argue that the peer review process stifles innovation in experimentation and acts as a poor screen against plagiarism. Despite its shortcomings, no foolproof system has yet been developed to take the place of peer review; however, researchers have been looking into electronic means of improving the peer review process. Unfortunately, the recent explosion in online-only/electronic journals has led to mass publication of a large number of scientific articles with little or no peer review. This poses significant risk to advances in scientific knowledge and its future potential.
The current article summarizes the peer review process, highlights the pros and cons associated with different types of peer review, and describes new methods for improving peer review.

WHAT IS PEER REVIEW AND WHAT IS ITS PURPOSE?

Peer Review is defined as “a process of subjecting an author’s scholarly work, research or ideas to the scrutiny of others who are experts in the same field” ( 1 ). Peer review is intended to serve two primary purposes. Firstly, it acts as a filter to ensure that only high quality research is published, especially in reputable journals, by determining the validity, significance and originality of the study. Secondly, peer review is intended to improve the quality of manuscripts that are deemed suitable for publication. Peer reviewers provide suggestions to authors on how to improve the quality of their manuscripts, and also identify any errors that need correcting before publication.

HISTORY OF PEER REVIEW

The concept of peer review was developed long before the scholarly journal. In fact, the peer review process is thought to have been used as a method of evaluating written work since ancient Greece ( 2 ). The peer review process was first described by a physician named Ishaq bin Ali al-Rahwi of Syria, who lived from 854-931 CE, in his book Ethics of the Physician ( 2 ). There, he stated that physicians must take notes describing the state of their patients’ medical conditions upon each visit. Following treatment, the notes were scrutinized by a local medical council to determine whether the physician had met the required standards of medical care. If the medical council deemed that the appropriate standards were not met, the physician in question could face a lawsuit from the maltreated patient ( 2 ).

The invention of the printing press in 1453 allowed written documents to be distributed to the general public ( 3 ). At this time, it became more important to regulate the quality of the written material that became publicly available, and editing by peers increased in prevalence. In 1620, Francis Bacon wrote the work Novum Organum, where he described what eventually became known as the first universal method for generating and assessing new science ( 3 ). His work was instrumental in shaping the Scientific Method ( 3 ). In 1665, the French Journal des sçavans and the English Philosophical Transactions of the Royal Society were the first scientific journals to systematically publish research results ( 4 ). Philosophical Transactions of the Royal Society is thought to be the first journal to formalize the peer review process in 1665 ( 5 ), however, it is important to note that peer review was initially introduced to help editors decide which manuscripts to publish in their journals, and at that time it did not serve to ensure the validity of the research ( 6 ). It did not take long for the peer review process to evolve, and shortly thereafter papers were distributed to reviewers with the intent of authenticating the integrity of the research study before publication. The Royal Society of Edinburgh adhered to the following peer review process, published in their Medical Essays and Observations in 1731: “Memoirs sent by correspondence are distributed according to the subject matter to those members who are most versed in these matters. The report of their identity is not known to the author.” ( 7 ). The Royal Society of London adopted this review procedure in 1752 and developed the “Committee on Papers” to review manuscripts before they were published in Philosophical Transactions ( 6 ).

Peer review in its systematized and institutionalized form has developed immensely since the Second World War, at least partly due to the large increase in scientific research during this period ( 7 ). It is now used not only to ensure that a scientific manuscript is experimentally and ethically sound, but also to determine which papers sufficiently meet the journal’s standards of quality and originality before publication. Peer review is now standard practice at most credible scientific journals, and is an essential part of determining the credibility and quality of work submitted.

IMPACT OF THE PEER REVIEW PROCESS

Peer review has become the foundation of the scholarly publication system because it effectively subjects an author’s work to the scrutiny of other experts in the field. Thus, it encourages authors to strive to produce high quality research that will advance the field. Peer review also supports and maintains integrity and authenticity in the advancement of science. A scientific hypothesis or statement is generally not accepted by the academic community unless it has been published in a peer-reviewed journal ( 8 ). The Institute for Scientific Information (ISI) only considers journals that are peer-reviewed as candidates to receive Impact Factors. Peer review is a well-established process which has been a formal part of scientific communication for over 300 years.

OVERVIEW OF THE PEER REVIEW PROCESS

The peer review process begins when a scientist completes a research study and writes a manuscript that describes the purpose, experimental design, results, and conclusions of the study. The scientist then submits this paper to a suitable journal that specializes in a relevant research field. The editors of the journal will review the paper to ensure that the subject matter is in line with that of the journal, and that it fits with the editorial platform; papers that fail this initial evaluation are rejected without external review. If the journal editors feel the paper sufficiently meets these requirements and is written by a credible source, they will send the paper to accomplished researchers in the field for a formal peer review. Peer reviewers are also known as referees (this process is summarized in Figure 1 ). The role of the editor is to select the most appropriate manuscripts for the journal, and to implement and monitor the peer review process. Editors must ensure that peer reviews are conducted fairly, and in an effective and timely manner. They must also ensure that there are no conflicts of interest involved in the peer review process.

Figure 1. Overview of the review process

When a reviewer is provided with a paper, he or she reads it carefully and scrutinizes it to evaluate the validity of the science, the quality of the experimental design, and the appropriateness of the methods used. The reviewer also assesses the significance of the research, and judges whether the work will contribute to advancement in the field by evaluating the importance of the findings, and determining the originality of the research. Additionally, reviewers identify any scientific errors and references that are missing or incorrect. Peer reviewers give recommendations to the editor regarding whether the paper should be accepted, rejected, or improved before publication in the journal. The editor will mediate author-referee discussion in order to clarify the priority of certain referee requests, suggest areas that can be strengthened, and overrule reviewer recommendations that are beyond the study’s scope ( 9 ). If, on the recommendation of the peer reviewers, the paper is accepted, it goes into the production stage, where it is tweaked and formatted by the editors, and finally published in the scientific journal. An overview of the review process is presented in Figure 1 .

WHO CONDUCTS REVIEWS?

Peer reviews are conducted by scientific experts with specialized knowledge on the content of the manuscript, as well as by scientists with a more general knowledge base. Peer reviewers can be anyone who has competence and expertise in the subject areas that the journal covers. Reviewers can range from young, up-and-coming researchers to established senior experts in the field. Often, the young reviewers are the most responsive and deliver the best quality reviews, though this is not always the case. On average, a reviewer will conduct approximately eight reviews per year, according to a study on peer review by the Publishing Research Consortium (PRC) ( 7 ). Journals will often have a pool of reviewers with diverse backgrounds to allow for many different perspectives. They will also keep a rather large reviewer bank, so that reviewers do not get burnt out, overwhelmed or time-constrained by reviewing multiple articles simultaneously.

WHY DO REVIEWERS REVIEW?

Referees are typically not paid to conduct peer reviews and the process takes considerable effort, so the question is raised as to what incentive referees have to review at all. Some feel an academic duty to perform reviews, and are of the mentality that if their peers are expected to review their papers, then they should review the work of their peers as well. Reviewers may also have personal contacts with editors, and may want to assist as much as possible. Others review to keep up-to-date with the latest developments in their field, and reading new scientific papers is an effective way to do so. Some scientists use peer review as an opportunity to advance their own research as it stimulates new ideas and allows them to read about new experimental techniques. Other reviewers are keen on building associations with prestigious journals and editors and becoming part of their community, as sometimes reviewers who show dedication to the journal are later hired as editors. Some scientists see peer review as a chance to become aware of the latest research before their peers, and thus be first to develop new insights from the material. Finally, in terms of career development, peer reviewing can be desirable as it is often noted on one’s resume or CV. Many institutions consider a researcher’s involvement in peer review when assessing their performance for promotions ( 11 ). Peer reviewing can also be an effective way for a scientist to show their superiors that they are committed to their scientific field ( 5 ).

ARE REVIEWERS KEEN TO REVIEW?

A 2009 international survey of 4000 peer reviewers conducted by the charity Sense About Science at the British Science Festival at the University of Surrey, found that 90% of reviewers were keen to peer review ( 12 ). One third of respondents to the survey said they were happy to review up to five papers per year, and an additional one third of respondents were happy to review up to ten.

HOW LONG DOES IT TAKE TO REVIEW ONE PAPER?

On average, it takes approximately six hours to review one paper ( 12 ), however, this number may vary greatly depending on the content of the paper and the nature of the peer reviewer. One in every 100 participants in the “Sense About Science” survey claims to have taken more than 100 hours to review their last paper ( 12 ).

HOW TO DETERMINE IF A JOURNAL IS PEER REVIEWED

Ulrichsweb is a directory that provides information on over 300,000 periodicals, including information regarding which journals are peer reviewed ( 13 ). After logging into the system using an institutional login (e.g., from the University of Toronto), search terms, journal titles or ISSN numbers can be entered into the search bar. The database provides the title, publisher, and country of origin of the journal, and indicates whether the journal is still actively publishing. The black book symbol (labelled ‘refereed’) reveals that the journal is peer reviewed.

THE EVALUATION CRITERIA FOR PEER REVIEW OF SCIENTIFIC PAPERS

As previously mentioned, when a reviewer receives a scientific manuscript, he/she will first determine if the subject matter is well suited for the content of the journal. The reviewer will then consider whether the research question is important and original, a process which may be aided by a literature scan of review articles.

Scientific papers submitted for peer review usually follow a specific structure that begins with the title, followed by the abstract, introduction, methodology, results, discussion, conclusions, and references. The title must be descriptive and include the concept and organism investigated, and potentially the variable manipulated and the systems used in the study. The peer reviewer evaluates if the title is descriptive enough, and ensures that it is clear and concise. A reader survey published by Oxford University Press in 2006 indicated that the title of a manuscript plays a significant role in determining reader interest, as 72% of respondents said they could usually judge whether an article would be of interest to them based on the title and the author, while 13% of respondents claimed to always be able to do so ( 14 ).

The abstract is a summary of the paper, which briefly mentions the background or purpose, methods, key results, and major conclusions of the study. The peer reviewer assesses whether the abstract is sufficiently informative and if the content of the abstract is consistent with the rest of the paper. The same survey indicated that 40% of respondents could determine whether an article would be of interest to them based on the abstract alone 60-80% of the time, while 32% could judge an article based on the abstract 80-100% of the time ( 14 ). This demonstrates that the abstract alone is often used to assess the value of an article.

The introduction of a scientific paper presents the research question in the context of what is already known about the topic, in order to identify why the question being studied is of interest to the scientific community, and what gap in knowledge the study aims to fill ( 15 ). The introduction identifies the study’s purpose and scope, briefly describes the general methods of investigation, and outlines the hypothesis and predictions ( 15 ). The peer reviewer determines whether the introduction provides sufficient background information on the research topic, and ensures that the research question and hypothesis are clearly identifiable.

The methods section describes the experimental procedures, and explains why each experiment was conducted. The methods section also includes the equipment and reagents used in the investigation. The methods section should be detailed enough that others can use it to repeat the experiment ( 15 ). Methods are written in the past tense and in the active voice. The peer reviewer assesses whether the appropriate methods were used to answer the research question, and if they were written with sufficient detail. If information is missing from the methods section, it is the peer reviewer’s job to identify what details need to be added.

The results section is where the outcomes of the experiment and trends in the data are explained without judgement, bias or interpretation ( 15 ). This section can include statistical tests performed on the data, as well as figures and tables in addition to the text. The peer reviewer ensures that the results are described with sufficient detail, and determines their credibility. Reviewers also confirm that the text is consistent with the information presented in tables and figures, and that all figures and tables included are important and relevant ( 15 ). The peer reviewer will also make sure that table and figure captions are appropriate both contextually and in length, and that tables and figures present the data accurately.

The discussion section is where the data is analyzed. Here, the results are interpreted and related to past studies ( 15 ). The discussion describes the meaning and significance of the results in terms of the research question and hypothesis, and states whether the hypothesis was supported or rejected. This section may also provide possible explanations for unusual results and suggestions for future research ( 15 ). The discussion should end with a conclusions section that summarizes the major findings of the investigation. The peer reviewer determines whether the discussion is clear and focused, and whether the conclusions are an appropriate interpretation of the results. Reviewers also ensure that the discussion addresses the limitations of the study, any anomalies in the results, the relationship of the study to previous research, and the theoretical implications and practical applications of the study.

The references are found at the end of the paper, and list all of the information sources cited in the text to describe the background, methods, and/or interpret results. Depending on the citation method used, the references are listed in alphabetical order according to author last name, or numbered according to the order in which they appear in the paper. The peer reviewer ensures that references are used appropriately, cited accurately, formatted correctly, and that none are missing.

Finally, the peer reviewer determines whether the paper is clearly written and if the content seems logical. After thoroughly reading through the entire manuscript, they determine whether it meets the journal’s standards for publication, and whether it falls within the top 25% of papers in its field ( 16 ) to determine priority for publication. An overview of what a peer reviewer looks for when evaluating a manuscript, in order of importance, is presented in Figure 2 .

Figure 2. How a peer reviewer evaluates a manuscript

To increase the chance of success in the peer review process, the author must ensure that the paper fully complies with the journal guidelines before submission. The author must also be open to criticism and suggested revisions, and learn from mistakes made in previous submissions.

ADVANTAGES AND DISADVANTAGES OF THE DIFFERENT TYPES OF PEER REVIEW

The peer review process is generally conducted in one of three ways: open review, single-blind review, or double-blind review. In an open review, both the author of the paper and the peer reviewer know one another’s identity. Alternatively, in single-blind review, the reviewer’s identity is kept private, but the author’s identity is revealed to the reviewer. In double-blind review, the identities of both the reviewer and author are kept anonymous. Open peer review is advantageous in that it prevents the reviewer from leaving malicious comments, being careless, or procrastinating completion of the review ( 2 ). It encourages reviewers to be open and honest without being disrespectful. Open reviewing also discourages plagiarism amongst authors ( 2 ). On the other hand, open peer review can also prevent reviewers from being honest for fear of developing bad rapport with the author. The reviewer may withhold or tone down their criticisms in order to be polite ( 2 ). This is especially true when younger reviewers are given a more esteemed author’s work, in which case the reviewer may be hesitant to provide criticism for fear that it will damage their relationship with a superior ( 2 ). According to the Sense About Science survey, editors find that completely open reviewing decreases the number of people willing to participate, and leads to reviews of little value ( 12 ). In the aforementioned study by the PRC, only 23% of authors surveyed had experience with open peer review ( 7 ).

Single-blind peer review is by far the most common. In the PRC study, 85% of authors surveyed had experience with single-blind peer review ( 7 ). This method is advantageous as the reviewer is more likely to provide honest feedback when their identity is concealed ( 2 ). This allows the reviewer to make independent decisions without the influence of the author ( 2 ). The main disadvantage of reviewer anonymity, however, is that reviewers who receive manuscripts on subjects similar to their own research may be tempted to delay completing the review in order to publish their own data first ( 2 ).

Double-blind peer review is advantageous as it prevents the reviewer from being biased against the author based on their country of origin or previous work ( 2 ). This allows the paper to be judged based on the quality of the content, rather than the reputation of the author. The Sense About Science survey indicates that 76% of researchers think double-blind peer review is a good idea ( 12 ), and the PRC survey indicates that 45% of authors have had experience with double-blind peer review ( 7 ). The disadvantage of double-blind peer review is that, especially in niche areas of research, it can sometimes be easy for the reviewer to determine the identity of the author based on writing style, subject matter or self-citation, and thus, impart bias ( 2 ).

Masking the author’s identity from peer reviewers, as is the case in double-blind review, is generally thought to minimize bias and maintain review quality. A study by Justice et al. in 1998 investigated whether masking author identity affected the quality of the review ( 17 ). One hundred and eighteen manuscripts were randomized; 26 were peer reviewed as normal, and 92 were moved into the ‘intervention’ arm, where editor quality assessments were completed for 77 manuscripts and author quality assessments were completed for 40 manuscripts ( 17 ). There was no perceived difference in quality between the masked and unmasked reviews. Additionally, the masking itself was often unsuccessful, especially with well-known authors ( 17 ). However, a previous study conducted by McNutt et al. had different results ( 18 ). In this case, blinding was successful 73% of the time, and they found that when author identity was masked, the quality of review was slightly higher ( 18 ). Although Justice et al. argued that this difference was too small to be consequential, their study targeted only biomedical journals, and the results cannot be generalized to journals of a different subject matter ( 17 ). Additionally, there were problems masking the identities of well-known authors, introducing a flaw in the methods. Regardless, Justice et al. concluded that masking author identity from reviewers may not improve review quality ( 17 ).

In addition to open, single-blind and double-blind peer review, there are two experimental forms of peer review. In some cases, following publication, papers may be subjected to post-publication peer review. As many papers are now published online, the scientific community has the opportunity to comment on these papers, engage in online discussions and post a formal review. For example, online publishers PLOS and BioMed Central have enabled scientists to post comments on published papers if they are registered users of the site ( 10 ). Philica is another journal launched with this experimental form of peer review. Only 8% of authors surveyed in the PRC study had experience with post-publication review ( 7 ). Another experimental form of peer review called Dynamic Peer Review has also emerged. Dynamic peer review is conducted on websites such as Naboj, which allow scientists to conduct peer reviews on articles in the preprint media ( 19 ). The peer review is conducted on repositories and is a continuous process, which allows the public to see both the article and the reviews as the article is being developed ( 19 ). Dynamic peer review helps prevent plagiarism as the scientific community will already be familiar with the work before the peer reviewed version appears in print ( 19 ). Dynamic review also reduces the time lag between manuscript submission and publishing. An example of a preprint server is the ‘arXiv’ developed by Paul Ginsparg in 1991, which is used primarily by physicists ( 19 ). These alternative forms of peer review are still unestablished and experimental. Traditional peer review is time-tested and still highly utilized. All methods of peer review have their advantages and deficiencies, and all are prone to error.

PEER REVIEW OF OPEN ACCESS JOURNALS

Open access (OA) journals are becoming increasingly popular as they allow the potential for widespread distribution of publications in a timely manner ( 20 ). Nevertheless, there can be issues regarding the peer review process of open access journals. In a study published in Science in 2013, John Bohannon submitted 304 slightly different versions of a fictional scientific paper (written by a fake author, working out of a non-existent institution) to a selected group of OA journals. This study was performed in order to determine whether papers submitted to OA journals are properly reviewed before publication in comparison to subscription-based journals. The journals in this study were selected from the Directory of Open Access Journals (DOAJ) and Beall’s List, a list of journals which are potentially predatory, and all required a fee for publishing ( 21 ). Of the 304 journals, 157 accepted the fake paper, suggesting that acceptance was based on financial interest rather than the quality of the article itself, while 98 journals promptly rejected the fakes ( 21 ). Although this study highlights useful information on the problems associated with lower quality publishers that do not have an effective peer review system in place, the article also generalizes the study results to all OA journals, which can be detrimental to the general perception of OA journals. There were two limitations of the study that made it impossible to accurately determine the relationship between peer review and OA journals: 1) there was no control group (subscription-based journals), and 2) the fake papers were sent to a non-randomized selection of journals, resulting in bias.
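As a quick arithmetic check on the figures quoted above (an illustrative calculation, not part of the original study): the accepted and rejected papers do not account for all 304 submissions, since the remaining journals had not rendered a clear accept/reject decision.

```python
# Illustrative tally of the Bohannon (2013) figures quoted above.
submitted = 304   # fake papers submitted to open access journals
accepted = 157    # journals that accepted the fake paper
rejected = 98     # journals that promptly rejected it

# Journals without a clear accept/reject decision
undecided = submitted - accepted - rejected

# Acceptance rate among the journals that actually rendered a decision
decided_rate = accepted / (accepted + rejected)

print(undecided)                     # 49 journals unaccounted for
print(round(decided_rate * 100, 1))  # 61.6% of decided cases accepted the fake
```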

JOURNAL ACCEPTANCE RATES

Based on a recent survey, the average acceptance rate for papers submitted to scientific journals is about 50% ( 7 ). Twenty percent of submitted manuscripts are rejected prior to review, and 30% are rejected following review ( 7 ). Of all submitted manuscripts, 41% are accepted on the condition of revision, while only 9% are accepted without a request for revision ( 7 ).
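These four percentages partition all submissions, which can be verified with a quick tally (an illustrative sketch; the category names are paraphrased from the survey figures quoted above):

```python
# Reported fate of manuscripts submitted to scientific journals (PRC survey),
# expressed as percentages of all submissions.
breakdown = {
    "rejected before review": 20,
    "rejected after review": 30,
    "accepted with revision": 41,
    "accepted without revision": 9,
}

total = sum(breakdown.values())
accepted = breakdown["accepted with revision"] + breakdown["accepted without revision"]
rejected = breakdown["rejected before review"] + breakdown["rejected after review"]

print(total)     # 100 -- the four categories cover every submission
print(accepted)  # 50  -- matches the ~50% overall acceptance rate
print(rejected)  # 50
```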

SATISFACTION WITH THE PEER REVIEW SYSTEM

Based on a recent survey by the PRC, 64% of academics are satisfied with the current system of peer review, and only 12% claimed to be ‘dissatisfied’ ( 7 ). The large majority, 85%, agreed with the statement that ‘scientific communication is greatly helped by peer review’ ( 7 ). There was a similarly high level of support (83%) for the idea that peer review ‘provides control in scientific communication’ ( 7 ).

HOW TO PEER REVIEW EFFECTIVELY

The following are ten tips on how to be an effective peer reviewer as indicated by Brian Lucey, an expert on the subject ( 22 ):

1) Be professional

Peer review is a mutual responsibility among fellow scientists, and scientists are expected, as part of the academic community, to take part in peer review. If one is to expect others to review their work, they should commit to reviewing the work of others as well, and put effort into it.

2) Be pleasant

If the paper is of low quality, suggest that it be rejected, but do not leave ad hominem comments. There is no benefit to being ruthless.

3) Read the invite

When a journal emails a scientist to ask them to conduct a peer review, it will usually provide a link to either accept or decline the invitation. Do not respond to the email; respond through the link.

4) Be helpful

Suggest how the authors can overcome the shortcomings in their paper. A review should guide the author on what is good and what needs work from the reviewer’s perspective.

5) Be scientific

The peer reviewer plays the role of a scientific peer, not an editor for proofreading or decision-making. Don’t fill a review with comments on editorial and typographic issues. Instead, focus on adding value with scientific knowledge and commenting on the credibility of the research conducted and conclusions drawn. If the paper has a lot of typographical errors, suggest that it be professionally proof edited as part of the review.

6) Be timely

Stick to the timeline given when conducting a peer review. Editors track who is reviewing what and when and will know if someone is late on completing a review. It is important to be timely both out of respect for the journal and the author, as well as to not develop a reputation of being late for review deadlines.

7) Be realistic

The peer reviewer must be realistic about the work presented, the changes they suggest, and their role. Peer reviewers may set the bar too high for the paper they are reviewing by proposing changes that are too ambitious, forcing editors to override them.

8) Be empathetic

Ensure that the review is scientific, helpful and courteous. Be sensitive and respectful with word choice and tone in a review.

9) Be aware of your role

Remember that both specialists and generalists can provide valuable insight when peer reviewing. Editors will try to get both specialised and general reviewers for any particular paper to allow for different perspectives. If someone is asked to review, the editor has determined they have a valid and useful role to play, even if the paper is not in their area of expertise.

10) Be organised

A review requires structure and logical flow. A reviewer should proofread their review before submitting it for structural, grammatical and spelling errors as well as for clarity. Most publishers provide short guides on structuring a peer review on their website. Begin with an overview of the proposed improvements; then provide feedback on the paper structure, the quality of data sources and methods of investigation used, the logical flow of argument, and the validity of conclusions drawn. Then provide feedback on style, voice and lexical concerns, with suggestions on how to improve.

In addition, the American Physiological Society (APS) recommends in its Peer Review 101 Handout that peer reviewers should put themselves in both the editor’s and author’s shoes to ensure that they provide what both the editor and the author need and expect ( 11 ). To please the editor, the reviewer should ensure that the peer review is completed on time, and that it provides clear explanations to back up recommendations. To be helpful to the author, the reviewer must ensure that their feedback is constructive. It is suggested that the reviewer take time to think about the paper; they should read it once, wait at least a day, and then re-read it before writing the review ( 11 ). The APS also suggests that graduate students and researchers pay attention to how peer reviewers edit their work, as well as to what edits they find helpful, in order to learn how to peer review effectively ( 11 ). Additionally, it is suggested that graduate students practice reviewing by editing their peers’ papers and asking a faculty member for feedback on their efforts. It is recommended that young scientists offer to peer review as often as possible in order to become skilled at the process ( 11 ). The majority of students, fellows and trainees do not get formal training in peer review, but rather learn by observing their mentors. According to the APS, one acquires experience through networking and referrals, and should therefore try to strengthen relationships with journal editors by offering to review manuscripts ( 11 ). The APS also suggests that experienced reviewers provide constructive feedback to students and junior colleagues on their peer review efforts, and encourages them to peer review to demonstrate the importance of this process in improving science ( 11 ).

The peer reviewer should only comment on areas of the manuscript that they are knowledgeable about ( 23 ). If there is any section of the manuscript they feel they are not qualified to review, they should mention this in their comments and not provide further feedback on that section. The peer reviewer is not permitted to share any part of the manuscript with a colleague (even if they may be more knowledgeable in the subject matter) without first obtaining permission from the editor ( 23 ). If a peer reviewer comes across something they are unsure of in the paper, they can consult the literature to try and gain insight. It is important for scientists to remember that if a paper can be improved by the expertise of one of their colleagues, the journal must be informed of the colleague’s help, and approval must be obtained for their colleague to read the protected document. Additionally, the colleague must be identified in the confidential comments to the editor, in order to ensure that he/she is appropriately credited for any contributions ( 23 ). It is the job of the reviewer to make sure that the colleague assisting is aware of the confidentiality of the peer review process ( 23 ). Once the review is complete, the manuscript must be destroyed and cannot be saved electronically by the reviewers ( 23 ).

COMMON ERRORS IN SCIENTIFIC PAPERS

When performing a peer review, there are some common scientific errors to look out for. Most of these errors are violations of logic and common sense: these may include contradicting statements, unwarranted conclusions, suggestion of causation when there is only support for correlation, inappropriate extrapolation, circular reasoning, or pursuit of a trivial question ( 24 ). It is also common for authors to suggest that two variables are different because the effects of one variable are statistically significant while the effects of the other are not, rather than directly comparing the two variables ( 24 ). Authors sometimes overlook a confounding variable and fail to control for it, or forget to include important details on how their experiments were controlled or on the physical state of the organisms studied ( 24 ). Another common fault is the author’s failure to define terms or use words with precision, as these practices can mislead readers ( 24 ). Jargon and/or misused terms can be a serious problem in papers. Inaccurate statements about specific citations are also a common occurrence ( 24 ). Additionally, many studies produce knowledge that can be applied to areas of science outside the scope of the original study; therefore, it is better for reviewers to look at the novelty of the idea, conclusions, data, and methodology, rather than scrutinize whether or not the paper answered the specific question at hand ( 24 ). Although it is important to recognize these points, when performing a review it is generally better practice for the peer reviewer not to focus on a checklist of things that could be wrong, but rather to carefully identify the problems specific to each paper and continuously ask themselves if anything is missing ( 24 ). An extremely detailed description of how to conduct peer review effectively is presented in the paper How I Review an Original Scientific Article written by Frederic G. Hoppin, Jr. 
It can be accessed through the American Physiological Society website under the Peer Review Resources section.

CRITICISM OF PEER REVIEW

A major criticism of peer review is that there is little evidence that the process actually works: that it is an effective screen for good-quality scientific work, or that it improves the quality of the scientific literature. As a 2002 study published in the Journal of the American Medical Association concluded, ‘Editorial peer review, although widely used, is largely untested and its effects are uncertain’ ( 25 ). Critics also argue that peer review is not effective at detecting errors. Highlighting this point, an experiment by Godlee et al. published in the British Medical Journal (BMJ) inserted eight deliberate errors into a paper that was nearly ready for publication, and then sent the paper to 420 potential reviewers ( 7 ). Of the 420 reviewers who received the paper, 221 (53%) responded; the average number of errors spotted by a reviewer was two, no reviewer spotted more than five, and 35 reviewers (16%) did not spot any.
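
The reported percentages follow directly from the raw counts. As a quick sanity check, here is a minimal sketch in Python using only the numbers quoted above:

```python
# Recompute the rates reported for the Godlee et al. error-seeding study,
# using only the counts quoted in the text.
sent = 420         # potential reviewers who received the paper
responded = 221    # reviewers who returned a review
spotted_none = 35  # responding reviewers who found no errors

response_rate = responded / sent      # fraction of reviewers who responded
miss_rate = spotted_none / responded  # fraction of respondents who spotted nothing

print(f"response rate: {response_rate:.0%}")  # 53%
print(f"spotted none:  {miss_rate:.0%}")      # 16%
```

Note that the 16% figure is computed against the 221 respondents, not the 420 reviewers originally contacted.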

Another criticism of peer review is that it is not conducted thoroughly by scientific conferences whose goal is to attract large numbers of submitted papers. Such conferences often accept any paper sent in, regardless of its credibility or the prevalence of errors, because the more papers they accept, the more money they can make from author registration fees ( 26 ). This misconduct was exposed by three MIT graduate students, Jeremy Stribling, Dan Aguayo and Maxwell Krohn, who developed a simple computer program called SCIgen that generates nonsense papers and presents them as scientific papers ( 26 ). Subsequently, a nonsense SCIgen paper submitted to a conference was promptly accepted. In 2014, Nature reported that French researcher Cyril Labbé had discovered sixteen SCIgen nonsense papers used by the German academic publisher Springer ( 26 ). Over 100 nonsense papers generated by SCIgen were published by the US Institute of Electrical and Electronic Engineers (IEEE) ( 26 ). Both organisations have been working to remove the papers. Labbé developed a program to detect SCIgen papers and has made it freely available to ensure publishers and conference organizers do not accept nonsense work in the future. It is available at this link: http://scigendetect.on.imag.fr/main.php ( 26 ).

Additionally, peer review is often criticized for being unable to accurately detect plagiarism. However, many believe that detecting plagiarism cannot practically be included as a component of peer review. As explained by Alice Tuff, development manager at Sense About Science, ‘The vast majority of authors and reviewers think peer review should detect plagiarism (81%) but only a minority (38%) think it is capable. The academic time involved in detecting plagiarism through peer review would cause the system to grind to a halt’ ( 27 ). Publishing house Elsevier began developing electronic plagiarism tools with the help of journal editors in 2009 to help improve this issue ( 27 ).

It has also been argued that peer review has lowered research quality by limiting creativity amongst researchers. Proponents of this view claim that peer review has repressed scientists from pursuing innovative research ideas and bold research questions that have the potential to make major advances and paradigm shifts in the field, as they believe that this work will likely be rejected by their peers upon review ( 28 ). Indeed, in some cases peer review may result in rejection of innovative research, as some studies may not seem particularly strong initially, yet may be capable of yielding very interesting and useful developments when examined under different circumstances, or in the light of new information ( 28 ). Scientists that do not believe in peer review argue that the process stifles the development of ingenious ideas, and thus the release of fresh knowledge and new developments into the scientific community.

Another criticism of peer review is that there are a limited number of people competent to conduct peer review relative to the vast number of papers that need reviewing. An enormous number of papers are published (1.3 million papers in 23,750 journals in 2006), far more than the available pool of competent peer reviewers could review ( 29 ). Thus, people who lack the required expertise to analyze the quality of a research paper are conducting reviews, and weak papers are being accepted as a result. It is now possible to publish any paper in an obscure journal that claims to be peer-reviewed, though the paper or journal itself could be substandard ( 29 ). On a similar note, the US National Library of Medicine indexes 39 journals that specialize in alternative medicine, and though they all identify themselves as “peer-reviewed”, they rarely publish any high quality research ( 29 ). This highlights the fact that peer review of more controversial or specialized work is typically performed by people who are interested in it and who hold similar views or opinions to the author, which can bias their review. For instance, a paper on homeopathy is likely to be reviewed by fellow practicing homeopaths, and thus is likely to be accepted as credible, though other scientists may find the paper to be nonsense ( 29 ). In some cases, papers are initially published, but their credibility is challenged at a later date and they are subsequently retracted. Retraction Watch is a website dedicated to revealing papers that have been retracted after publishing, potentially due to improper peer review ( 30 ).

Additionally, despite its many positive outcomes, peer review is criticized for delaying the dissemination of new knowledge into the scientific community, and for being an unpaid activity that takes scientists’ time away from activities that they would otherwise prioritize, such as research and teaching, for which they are paid ( 31 ). As described by Eva Amsen, Outreach Director for F1000Research, peer review was originally developed as a means of helping editors choose which papers to publish when journals had to limit the number of papers they could print in one issue ( 32 ). However, nowadays most journals are available online, either exclusively or in addition to print, and many journals have very limited printing runs ( 32 ). Since there are no longer page limits to journals, any good work can and should be published. Consequently, being selective for the purpose of saving space in a journal is no longer a valid excuse that peer reviewers can use to reject a paper ( 32 ). However, some reviewers have used this excuse when they have personal ulterior motives, such as getting their own research published first.

RECENT INITIATIVES TOWARDS IMPROVING PEER REVIEW

F1000Research was launched in January 2013 by Faculty of 1000 as an open access journal that immediately publishes papers (after an initial check to ensure that the paper is in fact produced by a scientist and has not been plagiarised), and then conducts transparent post-publication peer review ( 32 ). F1000Research aims to prevent delays in new science reaching the academic community that are caused by prolonged publication times ( 32 ). It also aims to make peer reviewing more fair by eliminating any anonymity, which prevents reviewers from delaying the completion of a review so they can publish their own similar work first ( 32 ). F1000Research offers completely open peer review, where everything is published, including the name of the reviewers, their review reports, and the editorial decision letters ( 32 ).

PeerJ was founded by Jason Hoyt and Peter Binfield in June 2012 as an open access, peer reviewed scholarly journal for the Biological and Medical Sciences ( 33 ). PeerJ selects articles to publish based only on scientific and methodological soundness, not on subjective determinants of ‘impact’, ‘novelty’ or ‘interest’ ( 34 ). It works on a “lifetime publishing plan” model which charges scientists for publishing plans that give them lifetime rights to publish with PeerJ, rather than charging them per publication ( 34 ). PeerJ also encourages open peer review, and authors are given the option to post the full peer review history of their submission with their published article ( 34 ). PeerJ also offers a pre-print review service called PeerJ Pre-prints, in which paper drafts are reviewed before being sent to PeerJ to publish ( 34 ).

Rubriq is an independent peer review service designed by Shashi Mudunuri and Keith Collier to improve the peer review system ( 35 ). Rubriq is intended to decrease redundancy in the peer review process so that the time lost in redundant reviewing can be put back into research ( 35 ). According to Keith Collier, over 15 million hours are lost each year to redundant peer review, as papers get rejected from one journal and are subsequently submitted to a less prestigious journal where they are reviewed again ( 35 ). Authors often have to submit their manuscript to multiple journals, and are often rejected multiple times before they find the right match. This process could take months or even years ( 35 ). Rubriq makes peer review portable in order to help authors choose the journal that is best suited for their manuscript from the beginning, thus reducing the time before their paper is published ( 35 ). Rubriq operates under an author-pay model, in which the author pays a fee and their manuscript undergoes double-blind peer review by three expert academic reviewers using a standardized scorecard ( 35 ). The majority of the author’s fee goes towards a reviewer honorarium ( 35 ). The papers are also screened for plagiarism using iThenticate ( 35 ). Once the manuscript has been reviewed by the three experts, the most appropriate journal for submission is determined based on the topic and quality of the paper ( 35 ). The paper is returned to the author in 1-2 weeks with the Rubriq Report ( 35 ). The author can then submit their paper to the suggested journal with the Rubriq Report attached. The Rubriq Report will give the journal editors a much stronger incentive to consider the paper as it shows that three experts have recommended the paper to them ( 35 ). Rubriq also has its benefits for reviewers; the Rubriq scorecard gives structure to the peer review process, and thus makes it consistent and efficient, which decreases time and stress for the reviewer. 
Reviewers also receive feedback on their reviews and most significantly, they are compensated for their time ( 35 ). Journals also benefit, as they receive pre-screened papers, reducing the number of papers sent to their own reviewers, which often end up rejected ( 35 ). This can reduce reviewer fatigue, and allow only higher-quality articles to be sent to their peer reviewers ( 35 ).

According to Eva Amsen, peer review and scientific publishing are moving in a new direction, in which all papers will be posted online, and a post-publication peer review will take place that is independent of specific journal criteria and solely focused on improving paper quality ( 32 ). Journals will then choose papers that they find relevant based on the peer reviews and publish those papers as a collection ( 32 ). In this process, peer review and individual journals are uncoupled ( 32 ). In Keith Collier’s opinion, post-publication peer review is likely to become more prevalent as a complement to pre-publication peer review, but not as a replacement ( 35 ). Post-publication peer review will not serve to identify errors and fraud but will provide an additional measurement of impact ( 35 ). Collier also believes that as journals and publishers consolidate into larger systems, there will be stronger potential for “cascading” and shared peer review ( 35 ).

CONCLUDING REMARKS

Peer review has become fundamental in assisting editors to select credible, high quality, novel and interesting research papers for publication in scientific journals, and in ensuring the correction of any errors or issues present in submitted papers. Though the peer review process still has some flaws and deficiencies, a more suitable screening method for scientific papers has not yet been proposed or developed. Researchers have begun, and must continue, to look for ways of addressing the current issues with peer review so that it becomes a foolproof system that allows only quality research papers into the scientific community.


A Beginner's Guide to Starting the Research Process

Research process steps

When you have to write a thesis or dissertation, it can be hard to know where to begin, but there are some clear steps you can follow.

The research process often begins with a very broad idea for a topic you’d like to know more about. You do some preliminary research to identify a problem. After refining your research questions, you can lay out the foundations of your research design, leading to a proposal that outlines your ideas and plans.

This article takes you through the first steps of the research process, helping you narrow down your ideas and build up a strong foundation for your research project.

Table of contents

  • Step 1: Choose your topic
  • Step 2: Identify a problem
  • Step 3: Formulate research questions
  • Step 4: Create a research design
  • Step 5: Write a research proposal
  • Other interesting articles

First you have to come up with some ideas. Your thesis or dissertation topic can start out very broad. Think about the general area or field you’re interested in—maybe you already have specific research interests based on classes you’ve taken, or maybe you had to consider your topic when applying to graduate school and writing a statement of purpose.

Even if you already have a good sense of your topic, you’ll need to read widely to build background knowledge and begin narrowing down your ideas. Conduct an initial literature review to begin gathering relevant sources. As you read, take notes and try to identify problems, questions, debates, contradictions and gaps. Your aim is to narrow down from a broad area of interest to a specific niche.

Make sure to consider the practicalities: the requirements of your programme, the amount of time you have to complete the research, and how difficult it will be to access sources and data on the topic. Before moving onto the next stage, it’s a good idea to discuss the topic with your thesis supervisor.

>>Read more about narrowing down a research topic


So you’ve settled on a topic and found a niche—but what exactly will your research investigate, and why does it matter? To give your project focus and purpose, you have to define a research problem.

The problem might be a practical issue—for example, a process or practice that isn’t working well, an area of concern in an organization’s performance, or a difficulty faced by a specific group of people in society.

Alternatively, you might choose to investigate a theoretical problem—for example, an underexplored phenomenon or relationship, a contradiction between different models or theories, or an unresolved debate among scholars.

To put the problem in context and set your objectives, you can write a problem statement. This describes who the problem affects, why research is needed, and how your research project will contribute to solving it.

>>Read more about defining a research problem

Next, based on the problem statement, you need to write one or more research questions. These target exactly what you want to find out. They might focus on describing, comparing, evaluating, or explaining the research problem.

A strong research question should be specific enough that you can answer it thoroughly using appropriate qualitative or quantitative research methods. It should also be complex enough to require in-depth investigation, analysis, and argument. Questions that can be answered with “yes/no” or with easily available facts are not complex enough for a thesis or dissertation.

In some types of research, at this stage you might also have to develop a conceptual framework and testable hypotheses.

>>See research question examples

The research design is a practical framework for answering your research questions. It involves making decisions about the type of data you need, the methods you’ll use to collect and analyze it, and the location and timescale of your research.

There are often many possible paths you can take to answering your questions. The decisions you make will partly be based on your priorities. For example, do you want to determine causes and effects, draw generalizable conclusions, or understand the details of a specific context?

You need to decide whether you will use primary or secondary data and qualitative or quantitative methods. You also need to determine the specific tools, procedures, and materials you’ll use to collect and analyze your data, as well as your criteria for selecting participants or sources.

>>Read more about creating a research design

Finally, after completing these steps, you are ready to complete a research proposal. The proposal outlines the context, relevance, purpose, and plan of your research.

As well as outlining the background, problem statement, and research questions, the proposal should also include a literature review that shows how your project will fit into existing work on the topic. The research design section describes your approach and explains exactly what you will do.

You might have to get the proposal approved by your supervisor before you get started, and it will guide the process of writing your thesis or dissertation.

>>Read more about writing a research proposal

If you want to know more about the research process, methodology, research bias, or statistics, make sure to check out some of our other articles with explanations and examples.



Articles: Finding (and Identifying) Peer-Reviewed Articles: What is Peer Review?

What is "Peer-Review"?

What are they?

Scholarly articles are papers that describe a research study. 

Why are scholarly articles useful?

They report original research projects that have been reviewed by other experts before they are accepted for publication, so you can reasonably be assured that they contain valid information. 

How do you identify scholarly or peer-reviewed articles?

  • They are usually fairly lengthy - most likely at least 7-10 pages
  • The authors and their credentials should be identified, at least the company or university where the author is employed
  • There is usually a list of References or Works Cited at the end of the paper, listing the sources that the authors used in their research

How do you find them? 

Some of the library's databases contain scholarly articles, either exclusively or in combination with other types of articles. 

Google Scholar is another option for searching for scholarly articles. 

Know the Difference Between Scholarly and Popular Journals/Magazines

Peer reviewed articles are found in scholarly journals.  The checklist below can help you determine if what you are looking at is peer reviewed or scholarly.

  • Both kinds of journals and magazines can be useful sources of information.
  • Popular magazines and newspapers are good for overviews, recent news, first-person accounts, and opinions about a topic.
  • Scholarly journals, often called scientific or peer-reviewed journals, are good sources of actual studies or research conducted about a particular topic. They go through a process of review by experts, so the information is usually highly reliable.
Scholarly journals:

  • Author is an expert on the specific topic of the article
  • Articles are "peer-reviewed", i.e. evaluated by experts in the field
  • A list of references or citations appears at the end of the article
  • Goal is to present the results of research

Popular magazines:

  • Author is usually a journalist who might or might not have particular expertise in the topic
  • Articles are reviewed by an editor and fact checker
  • References usually aren't formally cited
  • Goal may be to inform, entertain, or persuade

  • Last Updated: May 21, 2024 8:45 AM
  • URL: https://libguides.usu.edu/peer-review

Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License


Understanding Peer Review in Science

Peer Review Process

Peer review is an essential element of the scientific publishing process that helps ensure that research articles are evaluated, critiqued, and improved before release into the academic community. Take a look at the significance of peer review in scientific publications, the typical steps of the process, and how to approach peer review if you are asked to assess a manuscript.

What Is Peer Review?

Peer review is the evaluation of work by peers, who are people with comparable experience and competency. Peers assess each others’ work in educational settings, in professional settings, and in the publishing world. The goal of peer review is improving quality, defining and maintaining standards, and helping people learn from one another.

In the context of scientific publication, peer review helps editors determine which submissions merit publication and improves the quality of manuscripts prior to their final release.

Types of Peer Review for Manuscripts

There are three main types of peer review:

  • Single-blind review: The reviewers know the identities of the authors, but the authors do not know the identities of the reviewers.
  • Double-blind review: Both the authors and reviewers remain anonymous to each other.
  • Open peer review: The identities of both the authors and reviewers are disclosed, promoting transparency and collaboration.

Each method has advantages and disadvantages. Anonymous reviews reduce bias but limit collaboration, while open reviews are more transparent but may increase bias.

Key Elements of Peer Review

Proper selection of a peer group improves the outcome of the process:

  • Expertise : Reviewers should possess adequate knowledge and experience in the relevant field to provide constructive feedback.
  • Objectivity : Reviewers assess the manuscript impartially and without personal bias.
  • Confidentiality : The peer review process maintains confidentiality to protect intellectual property and encourage honest feedback.
  • Timeliness : Reviewers provide feedback within a reasonable timeframe to ensure timely publication.

Steps of the Peer Review Process

The typical peer review process for scientific publications involves the following steps:

  • Submission : Authors submit their manuscript to a journal that aligns with their research topic.
  • Editorial assessment : The journal editor examines the manuscript and determines whether or not it is suitable for publication. If it is not, the manuscript is rejected.
  • Peer review : If it is suitable, the editor sends the article to peer reviewers who are experts in the relevant field.
  • Reviewer feedback : Reviewers provide feedback, critique, and suggestions for improvement.
  • Revision and resubmission : Authors address the feedback and make necessary revisions before resubmitting the manuscript.
  • Final decision : The editor makes a final decision on whether to accept or reject the manuscript based on the revised version and reviewer comments.
  • Publication : If accepted, the manuscript undergoes copyediting and formatting before being published in the journal.
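
The steps above form a simple pipeline in which each stage permits only certain next stages. As a minimal sketch (hypothetical stage names, not any journal's actual submission system), the workflow can be modeled as allowed transitions in Python:

```python
# Hypothetical sketch of the editorial workflow above as a set of
# allowed stage-to-stage transitions.
WORKFLOW = {
    "submitted": {"editorial_assessment"},
    "editorial_assessment": {"rejected", "peer_review"},
    "peer_review": {"reviewer_feedback"},
    "reviewer_feedback": {"revision"},
    "revision": {"final_decision"},
    "final_decision": {"rejected", "accepted"},
    "accepted": {"published"},
}

def is_valid_path(stages):
    """Return True if every consecutive pair of stages is an allowed transition."""
    return all(b in WORKFLOW.get(a, set()) for a, b in zip(stages, stages[1:]))

print(is_valid_path(["submitted", "editorial_assessment", "peer_review",
                     "reviewer_feedback", "revision", "final_decision",
                     "accepted", "published"]))   # True
print(is_valid_path(["submitted", "published"]))  # False: skips review entirely
```

The key point the model captures is that rejection can occur at two stages (editorial assessment and final decision), while publication is reachable only through the full review-and-revision loop.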

Pros and Cons

While the goal of peer review is improving the quality of published research, the process isn’t without its drawbacks.

Advantages:

  • Quality assurance : Peer review helps ensure the quality and reliability of published research.
  • Error detection : The process identifies errors and flaws that the authors may have overlooked.
  • Credibility : The scientific community generally considers peer-reviewed articles to be more credible.
  • Professional development : Reviewers can learn from the work of others and enhance their own knowledge and understanding.

Disadvantages:

  • Time-consuming : The peer review process can be lengthy, delaying the publication of potentially valuable research.
  • Bias : Reviewers’ personal biases can affect their evaluation of the manuscript.
  • Inconsistency : Different reviewers may provide conflicting feedback, making it challenging for authors to address all concerns.
  • Limited effectiveness : Peer review does not always detect significant errors or misconduct.
  • Poaching : Some reviewers may take an idea from a submission and publish it before the original authors do.

Steps for Conducting Peer Review of an Article

Generally, an editor provides guidance when you are asked to peer review a manuscript. Here are the typical steps of the process.

  • Accept the right assignment: Accept invitations to review articles that align with your area of expertise to ensure you can provide well-informed feedback.
  • Manage your time: Allocate sufficient time to thoroughly read and evaluate the manuscript, while adhering to the journal’s deadline for providing feedback.
  • Read the manuscript multiple times: First, read the manuscript for an overall understanding of the research. Then, read it more closely to assess the details, methodology, results, and conclusions.
  • Evaluate the structure and organization: Check if the manuscript follows the journal’s guidelines and is structured logically, with clear headings, subheadings, and a coherent flow of information.
  • Assess the quality of the research: Evaluate the research question, study design, methodology, data collection, analysis, and interpretation. Consider whether the methods are appropriate, the results are valid, and the conclusions are supported by the data.
  • Examine the originality and relevance: Determine if the research offers new insights, builds on existing knowledge, and is relevant to the field.
  • Check for clarity and consistency: Review the manuscript for clarity of writing, consistent terminology, and proper formatting of figures, tables, and references.
  • Identify ethical issues: Look for potential ethical concerns, such as plagiarism, data fabrication, or conflicts of interest.
  • Provide constructive feedback: Offer specific, actionable, and objective suggestions for improvement, highlighting both the strengths and weaknesses of the manuscript.
  • Organize your review: Structure your review with an overview of your evaluation, followed by detailed comments and suggestions organized by section (e.g., introduction, methods, results, discussion, and conclusion).
  • Be professional and respectful: Maintain a respectful tone in your feedback, avoiding personal criticism or derogatory language.
  • Proofread your review: Before submitting your review, proofread it for typos, grammar, and clarity.

  • Review Article
  • Published: 15 November 2021

The past, present and future of Registered Reports

  • Christopher D. Chambers   ORCID: orcid.org/0000-0001-6058-4114 1 &
  • Loukia Tzavella   ORCID: orcid.org/0000-0002-1463-9396 1  

Nature Human Behaviour volume 6, pages 29–42 (2022)


Registered Reports are a form of empirical publication in which study proposals are peer reviewed and pre-accepted before research is undertaken. By deciding which articles are published based on the question, theory and methods, Registered Reports offer a remedy for a range of reporting and publication biases. Here, we reflect on the history, progress and future prospects of the Registered Reports initiative and offer practical guidance for authors, reviewers and editors. We review early evidence that Registered Reports are working as intended, while at the same time acknowledging that they are not a universal solution for irreproducibility. We also consider how the policies and practices surrounding Registered Reports are changing, or must change in the future, to address limitations and adapt to new challenges. We conclude that Registered Reports are promoting reproducibility, transparency and self-correction across disciplines and may help reshape how society evaluates research and researchers.


After more than a decade of meta-research and debate, the life and social sciences are well in the midst of a credibility revolution 1 , 2 , 3 . Faced with evidence of publication bias 4 , 5 , 6 , 7 , hindsight bias and selective reporting 8 , 9 , 10 , 11 , 12 , 13 , 14 , 15 , insufficient sample sizes 16 , inadequate data sharing 17 and suboptimal rates of both attempted 18 , 19 and successful replication 20 , 21 , 22 , researchers from across a broad range of fields are unifying around a core mission to improve reproducibility and transparency. In doing so, the deeper aim of the openness agenda is to stimulate cultural reform, aligning what is beneficial for individual scientists with what is beneficial for science 23 .

As scientists and policymakers grapple with the causes of irreproducibility, it has become clear that one of its main drivers is the so-called ‘results paradox’. On the one hand, scientists are taught from their earliest years that the one part of the research process that they must keep at arm’s length is the results of their research. The objective investigator—the detective—follows the data with discipline and restraint, never pressuring it to bend to their will, lest they fall prey to Richard Feynman’s famous warning that “the first principle is that you must not fool yourself and you are the easiest person to fool” (p. 12) 24 . On the other hand, the very same researcher is sent a message from prestigious journals, funding agencies and evaluation committees that if you want to succeed in science, be sure to publish a lot of clear, novel, positive findings. Researchers are therefore presented with conflicting goals: be a good detective who never indulges in data massaging or cherry picking, but also be a good lawyer who wins arguments and produces a continual supply of beautiful results 25 .

Many observed problems with reproducibility stem from researchers attempting to resolve this paradox while protecting their careers. When publishing in prestigious journals requires confirming one’s hypotheses but the results defy expectations, researchers can resolve this conflict by following the advice of Bem 26 , 27 and rewriting their hypotheses to ‘predict’ those results—a form of hindsight bias known as “hypothesizing after results are known” (HARKing) 8 . When academic leaders tell authors to “go with strongest studies” because “weak data dilute strong data” (p. 79) 28 and that “what you don’t have to do is tell the whole truth ... you can select the results you present” 29 , the responsive researcher answers by reporting the analyses that tell the best story, diverting negative or inconvenient results to the file drawer or converting them into publishable (probably false) positives. And when journal editors tell researchers that, all else being equal, some results are simply more deserving of publication than others, the strategic researcher responds by conducting a large number of small studies and reporting only the most persuasive findings (even if unreliable) rather than gambling on the outcome of larger, more definitive projects that may yield inconclusive data 30 .

Registered Reports (RRs) were proposed in 2012 as a way to free researchers from the pressure to engage in these counterproductive practices, thereby breaking the cycle that perpetuates bias and irreproducibility (Fig. 1 ). The RR model originates from the simple philosophy that to defeat the distorting effects of outcome bias on science, we must focus on the process and blind the evaluation of science to research outcomes 31 . This blinding is achieved by splitting peer review into two stages. In the first stage, authors submit their research question(s), theory, hypotheses, detailed methods and analysis plans and any preliminary data as needed. Following detailed review and revision—usually according to specific criteria—proposals that are favourably assessed receive in principle acceptance (IPA), which commits the journal to publishing the final paper regardless of whether the hypotheses are supported, provided that the authors adhere to their approved protocol and interpret the results in line with the evidence. Following IPA, authors then typically register their approved protocol in a repository, either publicly or under a temporary embargo. Then, after completing the research, they submit a stage 2 manuscript that includes the approved protocol plus the results and discussion, which may include clearly labelled post hoc analyses in addition to the preregistered outcomes (that is, findings from both confirmatory and exploratory analyses). The reviewers from stage 1 and/or newly invited reviewers then assess the completed stage 2 manuscript, focusing on compliance with the protocol and whether the conclusions are justified by the evidence. Crucially, reviewers do not relitigate the theory, hypotheses or methods, thereby preventing knowledge of the results from influencing recommendations. 
RR guidelines specify that editors similarly cannot reject a manuscript on the basis of any new concerns about the methodology or rationale or on the basis of the results themselves.

Figure 1

a , The typical RR workflow involves pre-study review of the study rationale, design and proposed analyses, including preliminary data as needed to provide proof of concept, effect size estimation for a sampling plan or hypothesis generation. Following IPA, authors conduct the research before submitting a complete manuscript with the results. The original reviewers then return at stage 2 to assess compliance with the protocol and to ensure that the conclusions are appropriately evidence-based. b , The distinction between RRs, Registered Replication Reports and study preregistration is a common source of confusion. Aside from the minority of RR formats that do not require public preregistration (see the section “ Limitations and drawbacks ”), RRs are mostly a subset of the wider family of preregistration methods but with the additional features of pre-study review and IPA regardless of results. Registered Replication Reports, offered by one psychology journal, are a subset of RRs. Panel a reproduced with permission from https://cos.io/rr/.

The aim of this modified review process is to reduce as much as possible the potential for biased research practices such as HARKing and selective reporting, while also eliminating the incentive for researchers to employ such practices in the first place. RRs are also designed to mitigate publication bias by journals and outcome bias by reviewers, since the decision to accept or reject is made before results are known. Finally, the format is designed to clearly distinguish the outcomes of pre-planned confirmatory research from exploratory data analysis.

In this Review, we take stock of the RR initiative, consider its recent history and historical underpinnings, emerging variants, impacts and limitations, and the probable future of the format into the 2020s and beyond. We also offer guidance to authors, reviewers and editors who are becoming familiar with RRs.

RRs as they exist today were first proposed in 2012 independently and simultaneously at two journals: Cortex and Perspectives on Psychological Science 32 , 33 . The format was then formally offered at these journals and at Social Psychology in 2013 (refs. 34 , 35 , 36 ). These first steps precipitated a gradual rise in adoptions, with now over 300 journals across a range of disciplines offering RRs as a new article type (Fig. 2 ). Early launches triggered the rise of RRs into mainstream publishing, but the origins of the format, and of preregistration in general, are much older. As early as 1878, chemist and logician Charles Peirce laid the foundations for the preregistration of protocols, writing that “[t]he hypothesis should be distinctly put as a question, before making the observations which are to test its truth” (p. 476) 37 . In the mid-twentieth century, psychologist Adriaan de Groot further argued that distinguishing exploratory from confirmatory research was vital for scientific progress, and that “it is a serious offense against the social ethics of science to pass off an exploration as a genuine testing procedure” (see Wagenmakers et al. 38 for a detailed historical overview). Embedded in the arguments of Peirce, de Groot and many others is the maxim that prespecifying predictions and analyses is an important tool for preventing confirmation bias in hypothesis testing.

Figure 2

RRs in their current form were first introduced in 2012 and Cortex was the first journal to officially offer RRs as an article type, in 2013. The first RRs were published in the journals Social Psychology ( Soc. Psychol. ) and Perspectives on Psychological Science ( Perspect. Psychol. Sci. ). The first journal to be exclusively dedicated to RRs, Comprehensive Results in Social Psychology ( CRSP ), was also launched in 2014. In 2015, Cortex published its first RR 110 , and Royal Society Open Science ( RSOS ) became the first multidisciplinary journal covering all STEM to offer RRs 111 . In 2015, nine political science journals launched a joint RR project for the 2016 American National Election Studies survey ( https://electionstudies.org/data-center/2016-time-series-study/ ), marking the first application of RRs in political science. The number of RR-adopting journals further increased in 2017, which was a key year on several fronts. As part of the Reproducibility Project: Cancer Biology, eLife published the first of many RRs 112 , and the first RR format for clinical trials was launched by BMC Medicine . The first RR in the field of computer science was also published in RSOS 113 , and the format was introduced for the first time in a specialist ecology journal ( BMC Ecology ). In the same year, Nature Human Behaviour launched RRs 114 , and F1000Research and Meta-Psychology paved the way for the post-publication peer-review model for RRs. The first RR funder/journal partnership was also announced in 2017 (ref. 90 ). By the end of 2018, the number of adopting journals had risen to 150, and the 100th stage 2 RR was published across all journals. 
This increase in the number of adopters paralleled a major disciplinary expansion, with the format being applied to preclinical science ( BMJ Open Science ), economics ( Journal of Development Economics ), empirical accounting ( Journal of Accounting Research ), animal neurophysiology ( European Journal of Neuroscience ), cancer research ( Cancer Medicine ), immunology, endocrinology, gastroenterology, herpetology and agricultural/soil sciences. In 2018, the British Psychological Society became the first society to launch RRs concurrently across all of its journals 105 . In 2019, PLoS Biology became the 200th adopter of RRs, Nature Human Behaviour published its first two RRs and the format was launched for the first time in the field of veterinary science ( Equine Veterinary Journal ). In 2020, RSOS and 11 journals launched the COVID-19 RR rapid review network 82 . As part of this ongoing initiative, participating journals strive to review stage 1 RRs related to COVID-19 in 7 days and to commit to open access publication with no article processing charges. As a result of this initiative, the past year also marked the first published RR in viral bioinformatics 85 .

Perhaps the earliest proposal for a RR-type review process, in which journal editors reach editorial decisions based on pre-study or results-blind review, was advanced by psychologist Robert Rosenthal who wrote in 1966: “What we may need is a system for evaluating research based only on the procedures employed. If the procedures are judged appropriate, sensible, and sufficiently rigorous to permit conclusions from the results, the research cannot then be judged inconclusive on the basis of the results and rejected by the referees or editors” (p. 36) 39 . Similar ideas were proposed throughout the 1970s and 1980s 40 , 41 , 42 , 43 , 44 but were not widely implemented. Yet, remarkably and unknown to mainstream science, by 1976, the first RR format had already been launched, albeit in the fringe discipline of parapsychology. For 17 years, the European Journal of Parapsychology quietly published RRs alongside regular articles before discontinuing them in 1992 (ref. 45 ).

While non-clinical researchers were debating the potential merits of results-blind review, medical researchers were busy weighing up the costs and benefits of public preregistration to address publication bias, particularly in the context of clinical trials. With the US Food and Drug Administration Modernization Act of 1997 came the first law requiring trial (pre)registration, which in turn led to the launch of the ClinicalTrials.gov registry in 2000. By 2005, the International Committee of Medical Journal Editors was requiring trial registration as a condition of journal publication. An increasing number of journals began offering protocol article types (and in some cases, entire journals), with some performing pre-study review of the protocols (for example, Trials , and The Lancet ’s since-abandoned protocol reviews format 46 ).

Crucially, none of these initiatives, article types or journals provided IPA regardless of the results. Thus, despite decades of debate about preregistration, pre-study review and results-blind acceptance within isolated channels, it would take until the launch of RRs at Cortex and Perspectives on Psychological Science in 2013 for the combined model to take hold. From there, the next 2 years witnessed a gradual increase in the number of psychology journals adopting RRs, followed by the first general science, technology, engineering and mathematics (STEM) journal in 2016 ( Royal Society Open Science ; Fig. 2 ). After a series of major launches (for example, Nature Human Behaviour , BMC Medicine and PLoS Biology ) and broader disciplinary expansion throughout 2017–2018, the RR format permanently entered the mainstream.

At the time of writing, RRs are offered by over 300 journals, with 591 stage 2 articles so far published by 94 adopting outlets. With the format becoming more available and associated with a growing published corpus, the first signs of impact are emerging. In this section, we review some of the early signs of its effectiveness. We also introduce recent variants of the model, summarize the key ingredients that make a high-quality RR, address some major misconceptions (Table 1 ) and genuine limitations that have emerged before offering specific recommendations to authors (Box 1 ), reviewers (Box 2 ) and editors (Box 3 ).

Box 1 Top tips for authors

The planning stage

RRs famously champion the rigour of the proposed methodology and analyses, but the specificity and importance of the research question(s) should also be evaluated during the planning stage. Study plans should first target specific, clear research questions and respective hypotheses (if applicable), on which robust methods and analyses can then be developed.

The feasibility of the proposed study should be considered as early as possible during the planning stage. A well-justified sampling plan that meets journal requirements (for example, 90% statistical power) should be formulated depending on time and resource constraints. Hypotheses and statistical models may need to be adjusted accordingly.

The feasibility and validity of the proposed methods and analyses should be assessed before submission to maximize the efficiency of the peer review process and minimize the need for deviations from the stage 1 protocol following IPA. This can be challenging given that researchers often have not yet acquired data. For this reason, pilot studies or data simulations are highly recommended.
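
The sampling-plan advice above can be made concrete with a quick calculation. The following is a minimal sketch using the standard normal-approximation formula for a two-sided, two-sample comparison; the effect size d = 0.5 and the 90%/5% thresholds are illustrative assumptions, not journal requirements:

```python
# Sketch: approximate per-group sample size for a two-sample comparison,
# using the normal approximation n ~= 2 * ((z_{1-a/2} + z_{power}) / d)^2.
# The effect size d = 0.5 is an illustrative assumption, not a recommendation.
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, power=0.90, alpha=0.05):
    """Approximate n per group for a two-sided, two-sample test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    return ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

# Normal approximation; the exact t-based value is slightly larger.
print(n_per_group(0.5))
```

Running such a calculation (or a simulation) before submission makes it easy to show reviewers that the proposed sample size is feasible within time and resource constraints.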

The submission stage

Authors should ensure that there are precise and exhaustive links between each research question, hypothesis, sampling plan, analysis plan and contingent interpretation depending on different outcomes. A RR that minimizes researcher degrees of freedom may receive IPA without the need for major corrections given that the robustness of the methods has been established. The use of pre-study coding protocols can help achieve maximum cohesion (for example, see https://osf.io/6bv27/ ).

Authors should formulate analysis plans that take into account all the steps in data pre-processing (for example, exclusions, cleaning and aggregation) and analyses (for example, statistical model assumptions). If such details are impossible to specify, then authors should propose a more general plan in combination with blinded analysis methods 115 , for instance, as deployed in a recent RR by Dutilh et al. 116 .

The stage 1 protocol should, where possible, include outcome-neutral checks for ensuring that the proposed hypotheses are capable of being tested, such as positive/negative controls, tests of intervention fidelity or data quality checks. These conditions need to be successfully met at stage 2; therefore, authors should carefully consider which, if any, such conditions can be preregistered at stage 1.

Box 2 Top tips for reviewers—questions to ask during stage 1 and stage 2 assessment

Do the research questions and proposed hypotheses make sense in light of the theory or application? Are they defined precisely? Are the hypotheses capable of answering the research question?

Is the protocol sufficiently detailed to enable replication by an expert in the field, and to close off sources of undisclosed procedural or analytic flexibility?

Is there an exact mapping between the theory, scientific hypotheses, sampling plan (for example, power analysis, where applicable), preregistered statistical tests and possible interpretations given different outcomes?

Where relevant, does the power analysis (or alternative sampling plan) reach the minimum threshold required by the journal’s policy (for example, 90% power, Bayes factor > 6)?

Does the sampling plan for each hypothesis propose a realistic and well-justified estimate of the effect size?

Have the authors avoided the common pitfall of relying on conventional null hypothesis significance testing to conclude evidence of absence from null results? Where the authors intend to interpret a negative result as evidence that an effect is absent, have they proposed an inferential method that is capable of drawing such a conclusion, such as Bayesian hypothesis testing 72 , 117 or frequentist equivalence testing 118 ?

Have the authors minimized all discussion of exploratory analyses apart from those that must be explained to justify specific design features? Maintaining this clear distinction at stage 1 can prevent exploratory analyses at stage 2 being inadvertently presented as pre-planned.

Have the authors clearly distinguished work that has already been done (for example, preliminary studies and data analyses) from work yet to be done?

Have the authors prespecified positive controls, manipulation checks or other data quality checks? If not, have they justified why such tests are either infeasible or unnecessary? Is the design sufficiently well controlled in all other respects?

When proposing positive controls or other data quality checks that rely on inferential testing, have the authors included a statistical sampling plan that meets the minimum requirement in terms of statistical power or evidence strength (if there is one)?

Did the authors formally preregister their stage 1 protocol and have they provided a direct URL to the approved protocol in the stage 2 manuscript? Did they stay true to their protocol? Are any deviations from the protocol clearly justified and fully documented?

Is the introduction section in the stage 1 manuscript (including hypotheses) the same as in the stage 2 manuscript? Are any changes transparently flagged?

Did any prespecified data quality checks or positive controls succeed?

Are any additional post hoc analyses justified, performed appropriately and clearly distinguished from the preregistered analyses? Are the conclusions appropriately centred on the outcomes of the preregistered analyses?

Are the overall conclusions based on the evidence?

Dos and don’ts

Do suggest additional exploratory analyses at stage 2, but do not expect the editor to necessarily require the authors to conduct them. Authors are not obliged to conduct any unregistered analyses unless such tests are necessary to support conclusions that go beyond the preregistered analyses, and these must be conclusions that the authors (not reviewers or editors) wish to draw. This protection exists to prevent goal-post shifting and subtle forms of publication bias from affecting the stage 2 process.

If you find a flaw in the protocol at stage 2 that was missed or unaddressed at stage 1, do mention it but do not expect the manuscript to be rejected on that basis. It is important to remember that the protocol is not generally subject to re-review at stage 2, and editors cannot require authors to do extra studies. Barring rare cases in which the editor, authors and reviewers agree that a severe error was made, the most that will probably happen is that the authors will be asked to address potential design limitations in the discussion section.
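
The question above about interpreting null results as evidence of absence can be illustrated with a small example. This is a minimal sketch of frequentist equivalence testing (two one-sided tests, TOST) using a normal approximation rather than the exact t-based procedure; the observed difference, standard error and equivalence bounds are invented for illustration:

```python
# Sketch: frequentist equivalence testing (TOST) via a normal approximation.
# Equivalence is declared when both one-sided tests reject, i.e. when the
# larger of the two one-sided p-values falls below alpha. All numbers used
# below are hypothetical.
from statistics import NormalDist

def tost_equivalence(mean_diff, se, bound):
    """Two one-sided z-tests of the difference against +/-bound."""
    z_lower = (mean_diff + bound) / se   # H0: diff <= -bound
    z_upper = (mean_diff - bound) / se   # H0: diff >= +bound
    p_lower = 1 - NormalDist().cdf(z_lower)
    p_upper = NormalDist().cdf(z_upper)
    return max(p_lower, p_upper)

# Observed difference 0.02, standard error 0.05, equivalence bounds +/-0.20:
p = tost_equivalence(0.02, 0.05, 0.20)
print(f"TOST p = {p:.4f}")  # a small p indicates the effect lies within the bounds
```

A reviewer can then check that the proposed bounds correspond to a justified smallest effect size of interest, rather than being chosen post hoc.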

Box 3 Advice for journal editors considering offering RRs

Are RRs appropriate for my discipline?

RRs are potentially useful in any field where at least one of the following problems exist: publication bias, hindsight bias (HARKing), selective reporting of desirable results by authors (including but not limited to p -hacking), lack of sufficient sample sizes to draw meaningful conclusions or lack of close replication. Not all fields experience these problems to the same extent. For example, many sciences do not employ null hypothesis significance testing or even inferential statistics. Other fields, particularly in the physical sciences, have a strong culture of replication. Publication bias, however, exists across almost all sciences 4 . Therefore, in considering whether to offer RRs, the first question an editor should ask is: would two studies in my field that addressed the exact same question and were conducted to the exact same methodological standard have the same chance of being published in my field’s most prestigious journals if one study showed clear, persuasive results while the other study showed null or ambiguous results? If the answer is no—and the study showing more compelling results would have a greater chance of being accepted—then RRs should be offered as a means to reduce publication bias.

Can RRs be suitable even when authors are not testing hypotheses?

Yes. It is true that almost all RRs test hypotheses; however, the format can also have value in an observational setting where publication bias could either suppress certain outcomes or encourage biased analysis or interpretation by authors. For example, a marine biologist may be interested in measuring the concentration of fish, plankton and particulate organic matter to assess the overall health of an ecosystem. Even though the researcher has no specific hypotheses, by detailing the data acquisition method, analysis techniques and which outcomes will lead to which conclusions, the researcher can control their own analytical and interpretive bias, and by incorporating pre-study review and IPA, they eliminate the risk of publication bias.

Are RRs suitable only for replication studies?

No. While RRs are well suited for replications (especially when the review criteria are modified to ensure accountability; Supplementary Note ), approximately 50% of the hypotheses in published RRs arise from original studies 48 .

If a journal offers RRs, does the RR format need to replace the traditional research article format at the journal?

No. So far, virtually all new adopters have added RRs as a new article type, with the exception of Comprehensive Results in Social Psychology and a small number of additional journals/platforms that were created as RR-only outlets. Given current limitations in the pace of stage 1 review (see the sections “ Limitations and drawbacks ” and “ Future ”), there will always be a place for hypothesis-driven research outside the RR format in addition to pure exploratory research.

Will RRs lower my journal’s impact factor?

Probably not. Preliminary evidence suggests that RRs are cited at the same rate as, or slightly more often than, comparable non-RRs 59 . Although impact factor carries currency with many journals and publishers, the prevailing evidence suggests that it does not contain information about scientific quality 55 , 56 , 57 , 58 .

How should editors triage initial stage 1 submissions?

Many journals assess stage 1 manuscripts according to specific criteria (Supplementary Note ). To maximize the efficiency of the review process, we recommend that editors always perform a desk assessment against these criteria (Box 2 ) to ensure that a submitted manuscript avoids common pitfalls before being sent for specialist peer review.

Would editors be required to accept any methodologically sound protocol, regardless of the importance or relevance of the research question?

No. Many journals assess the subjective value of the research as part of the stage 1 assessment. For instance, at PLoS Biology , the stage 1 criterion 1 is “The importance of the research question(s)” 119 , while Nature Human Behaviour judges “The importance of the research question(s) and relevance for a broad, multidisciplinary audience” 120 . Other journals place less emphasis on such judgements; for example, at Cortex and at Royal Society Open Science , the “importance” of the research question is replaced with an assessment of the “scientific validity of the research question” 121 , 122 . Each journal is free to determine how selective it wishes to be at stage 1. The main requirement is that any such selectivity is applied before the results are known and transparently communicated to authors in the journal’s policy.

Would a journal be obligated to publish the results of a RR that appeared promising at stage 1 but was conducted to a low standard?

No. The stage 1 review process allows reviewers and editors to prespecify positive controls, manipulation checks and data checks for assessing quality of implementation (for example, data verifying that a particular intervention or measure was appropriately administered). To prevent publication bias, the only requirement is that such tests are prespecified at stage 1 before results are known and that they are independent of the primary outcome measures and main study hypotheses. Stage 2 rejection will be very rare; for example, at Cortex , the European Journal of Neuroscience and Royal Society Open Science , where C.D.C. is a RR editor, the stage 2 rejection rate is currently zero.

How complicated and arduous is the process of adding RRs to a journal?

Installing RRs has become increasingly easy over time. With around 300 journals now offering them, all major publishers have at least one adopter under their umbrella. In many cases, the format and workflow can be imported very easily between journals. The Center for Open Science hosts dedicated resources for editors to assist in implementation 123 . Journals can also offer RRs by joining the Peer Community in Registered Reports (PCI RR) and, if they wish, use the review infrastructure of the PCI RR in place of establishing an internal infrastructure ( https://rr.peercommunityin.org/ ).

Do RRs create more work for editors compared with regular articles?

Editing RRs requires careful attention to the study rationale and methodology, ensuring that authors adhere to the journal’s RR policy, guiding them on how to best address (the sometimes competing) recommendations from reviewers and making certain that authors do not feel compelled to comply with requests from reviewers that violate RR policy. An RR editor must therefore be highly engaged with each submission and read the article. Insofar as these practices define the minimum requirements for competent editing, the RR process will produce a comparable editorial workload. At the same time, it is important to note that RR submissions undergo a minimum of two phases of review, at stage 1 and stage 2, whereas a regular article is subjected to a single review process. Whether this bifurcation of review for RRs translates to increased editorial workload is unclear and is likely to depend on the extent to which each RR replaces a regular article that the authors would otherwise have submitted and on the comparative rejection rate of RRs versus regular articles at the journal. If, as observed informally (Table 1 ), the rejection rate for RRs is lower than for regular articles and if each RR replaces at least one regular article that the authors would have submitted, then RRs may still reduce overall editorial workload by reducing the number of sequential journal submissions and therefore the duplication of the review process across journals.

Are RRs possible in fields where researchers analyse existing datasets?

Yes. The majority of adopting journals invite RRs proposing secondary analyses of existing datasets, provided that authors take sufficient steps to minimize risk of bias and analytical overfitting. Such measures could include avoiding all prior observation of the data or key variables, proposing key analyses in an unseen holdout sample or recruiting a blinded analyst.
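One of these safeguards, the unseen holdout sample, can be made concrete and auditable in a few lines. The sketch below is a minimal illustration only (the function and field names are invented, not part of any journal's RR policy): it partitions the row indices of an existing dataset into an exploration set and an unseen holdout set, with a seed that can be recorded in the stage 1 protocol so that anyone can later reproduce exactly which rows were held out.

```python
import numpy as np

def make_holdout_split(n_rows, holdout_frac=0.5, seed=2024):
    """Partition row indices of an existing dataset into an exploration
    set and an unseen holdout set, before any analysis is run.

    Recording the seed in the stage 1 protocol makes the split itself
    auditable and reproducible.
    """
    rng = np.random.default_rng(seed)
    indices = rng.permutation(n_rows)
    n_holdout = int(n_rows * holdout_frac)
    return {
        "holdout": sorted(indices[:n_holdout].tolist()),
        "exploration": sorted(indices[n_holdout:].tolist()),
    }

split = make_holdout_split(1000)
```

Under this scheme, authors would explore freely on `split["exploration"]` but commit at stage 1 to running the confirmatory analyses, untouched, on `split["holdout"]`.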

Could RRs be useful in qualitative research?

Yes. To the extent that publication bias is a concern in both qualitative and quantitative research 124 , RRs hold promise for improving reproducibility. To date, however, only a handful of journals offer qualitative RRs (for example, BMC Medicine and 13 more; see column 20 at https://tinyurl.com/RRpolicylist ), and none include a specific RR policy text tailored for qualitative methods. The development of qualitative protocols for preregistration and RRs is a key area for future innovation 125 , 126 , 127 , 128 .

Field spread and author demographics

Since their initial launch within psychology and neuroscience, RRs have spread to specialist journals covering a range of disciplines, primarily in the life and social sciences (Supplementary Fig. 1 ). As this reach has grown, we can begin to explore the demographics of submitting authors to gauge the accessibility of the format. The prospect of being pre-accepted at a respectable journal is likely to be appealing to many researchers—and perhaps early-career researchers (ECRs) in particular—seeking to eliminate the risk that the results of their research will determine publication and, consequently, their career prospects. However, the often substantial sample sizes needed for RRs to achieve the minimum levels of statistical power required by many journals, combined with the time taken for stage 1 review (see the section “ Limitations and drawbacks ”), could act as a deterrent, especially for researchers with major resource constraints. To provide a preliminary insight into accessibility, we analysed the author demographics of 141 stage 1 RRs submitted to Cortex , the European Journal of Neuroscience , NeuroImage and Royal Society Open Science . We found that 77% of submitted stage 1 manuscripts were first-authored by PhD students or postdoctoral researchers (Supplementary Fig. 2a ). At the journal Cortex , where a direct comparison between different article types was possible, we found that 78% of submitted RRs were led by ECRs compared with 67% in a comparison sample of regular articles (Supplementary Fig. 2b,c ). It would be premature to conclude that RRs present no barriers for researchers 47 , but these results at least provide no reason to fear that RRs are beyond the reach of ECRs.

Early impacts

Are RRs working as intended to reduce bias and improve reliability? Although the initiative is too young to answer this question with confidence, metascientific investigations are beginning to reveal signs of bias control, study quality, computational reproducibility and citation influence.

Bias control

Since reporting and publication biases typically favour positive results, RRs, if successful, should yield a greater proportion of negative results compared with the conventional literature. So far, this prediction appears to hold: a recent analysis of 296 hypotheses published across 127 RRs in different fields found that 60% of RRs report null results, which is approximately five times greater than the rate in regular articles 48 . In psychology, this difference is even more striking, with a new study 49 finding that just 4% of regular articles failed to confirm the first hypothesis compared with 56% for RRs 4 (see also Wiseman et al. 45 ). It would be tempting to conclude that this increase is caused by the elimination of selective reporting, HARKing and publication bias resulting from pre-study review and IPA, which are the key ingredients of the RR process. However, it is possible that authors, knowing that their study will be published regardless of whether their hypotheses are supported, might employ the RR format to test riskier hypotheses. Moreover, RRs themselves might select for authors who are diligent in controlling their own reporting bias, regardless of the article type. To address such confounds, future observational studies could compare the plausibility of hypotheses in RRs compared with non-RRs, as well as indicators of biased reporting in RRs and non-RRs within the same sets of authors (see the section “ Future ”).

Computational reproducibility

There are several reasons why RRs might be more computationally reproducible than conventional articles. At many journals, the RR review policy has more stringent expectations concerning open data and code, which is associated with greater accuracy in statistical reporting 50 . In addition, IPA eliminates the incentive for authors to conceal messy or inconvenient elements of their data, and early adopters of the format may also be predisposed to performing research to a higher level of transparency. A recent study 51 indeed suggests that the results of RRs can be more readily reproduced from the acquired data compared with regular articles. Of the 35 RRs published in psychology that made data and code openly available, 57% were computationally reproducible compared with 31% in a previous analysis of regular articles 52 . Although RRs appear to perform better than the status quo, these results clearly show room for improvement and require substantially more data to be confirmed.

Citation profile

Clinical trials reporting negative results receive between two and ten times fewer citations than trials reporting positive results 53 , 54 . Given the increased rate of negative results in RRs, authors may therefore be concerned that submitting a RR could be disadvantageous to their careers. Similarly, one of the immediate reactions to RRs from many journal editors is that the format could risk reducing their outlet’s impact factor, a powerful albeit spurious measure of research influence 55 , 56 , 57 , 58 . In fact, such concerns may be unwarranted; a recent analysis 59 of 70 RRs reported in a non-peer-reviewed preprint found that RRs are cited the same as or slightly higher than comparable regular articles.

Study quality

How do expert assessments of RRs compare with non-RRs? In a recent study 60 , Soderberg et al. reported an experiment in which 353 scientists rated a sample of published, partially blinded RRs and non-RRs on 19 study characteristics, including importance, novelty, creativity, innovation and rigour. RRs numerically outperformed non-RRs on every criterion, showing statistically robust and large improvements in attributes such as methodological rigour and overall article quality, while being statistically indistinguishable from comparison papers in terms of features such as novelty and creativity. These results held even among reviewers who admitted being sceptical or neutral about RRs.

Emerging variants

As the reach of RRs has grown, several modified versions of the format have arisen to accommodate specific needs. Five major strands have emerged: results-blind review, accountable replication policies, RRs involving post-publication peer review, publisher-level RRs and publisher-independent RRs. These variants are briefly summarized below and discussed in more detail in the Supplementary Note .

Results-blind review

In this modified workflow, stage 1 peer review is undertaken after results are known to the authors but before they are known to the reviewers and editors 61 . Following IPA, the authors then submit the full manuscript containing the data and conclusions. Because authors need not wait until IPA to conduct their research, this format prevents the stage 1 review time from delaying data collection and analysis. However, reviewers are unable to improve the study design, and the format does not prevent reporting bias (for example, p -hacking or HARKing) by authors. To date, at least 13 journals, primarily in psychology and management, have adopted results-blind review as an optional article track. Two journals, Cortex and Infant and Child Development , have also launched Verification Reports, a results-blind format dedicated to assessing the computational reproducibility and robustness of previous findings based on a re-analysis of the original study data.

Accountable replications

Conceived by psychologist Sanjay Srivastava 62 , this variant emerged from the principle that when a journal publishes a research finding, it should commit to publishing all methodologically sound replications of that finding regardless of how the results turn out and regardless of subjective importance or methodological flaws in the original study. Using a modified set of the RR assessment criteria, the journal reaches a stage 1 IPA decision on the basis of technical validity and the methodological proximity between the replication and target study (Supplementary Fig. 3 ). To date, Royal Society Open Science is the only journal that implements a complete and fully specified version of this concept, following partial implementations at Clinical Psychological Science , the Journal of Research in Personality and Psychological Science 63 , 64 , 65 , 66 , 67 .

Post-publication peer review RRs

RRs usually rely on conventional pre-publication review in which reviewers serve as gatekeepers to IPA and stage 2 acceptance. In contrast, by combining post-publication peer review with RRs, the stage 1 manuscript is published almost immediately following initial receipt and is then openly reviewed 68 , 69 . If the reviews are positive (with authors having the usual opportunity to revise the protocol), then the article is awarded IPA and, once passing stage 2 review, the final manuscript with results is badged as a RR. To date, this model has been adopted across ten journals, including F1000Research and Wellcome Open Research .

Publisher-level RRs

The review process for RRs is typically managed by a single journal, but recently, some journals have begun implementing a distributed model in which stage 1 and stage 2 manuscripts can be reviewed and published in different journals under the same publisher. In one working model, the completed stage 2 RR is then cross-linked to the accepted stage 1 protocol using an international RR identifier 49 .

Publisher-independent RRs

Can RRs exist beyond journals? As part of the recently created Peer Community in Registered Reports (PCI RR), stage 1 and stage 2 preprints are reviewed independently of journals ( https://rr.peercommunityin.org ). Where the reviews are positive, PCI RR issues a positive recommendation and authors can then choose to publish their recommended preprint in any ‘PCI RR-friendly’ journal without further peer review.

Seven virtues of high-quality RRs

What makes a good RR? In this section, we describe seven desirable characteristics that authors should aim to capture in their stage 1 and stage 2 manuscripts. Further guidance may be found in Box 1 , in the RR policies for specific journals (see the list at https://cos.io/rr/ ) and in a recent practical primer by Kiyonaga and Scimeca 70 .

The first and foremost ingredient is that the proposal tackles a scientifically valid question and ideally one that other scientists agree is important to answer. The introduction section of the stage 1 manuscript should make clear the underlying theory or application from which the question arises, leaving the reader in no doubt as to why the study is being proposed. Second, where the study proposes hypotheses, they should be stated as precisely as possible in terms of specific variables to ensure falsifiability. In quantitative hypothesis-driven sciences, we recommend that researchers consider the open-theory pathway proposed by Guest and Martin 71 to help ensure that hypotheses are formulated as a natural specification of computational theory rather than emerging loosely—and often with questionable rationale—from a vague conceptual framework. Some of the most effective RRs achieve this by identifying pressure points in competing theories and then devising hypotheses to adjudicate between them.

With the theory, rationale and hypotheses in place, the third key ingredient is a study procedure and analysis plan that is as rigorous, transparent and comprehensive as possible. Data acquisition protocols and analysis plans should be prespecified with sufficient detail to be reproduced by experts in the field, ideally with accompanying code, and with rigorous experimental controls where appropriate, including both negative and positive controls. Where the conclusions will depend on inferential statistics, the procedure should include a detailed sampling plan, such as a statistical power analysis, Bayes factor design analysis 72 or an appropriate alternative, which, crucially, should also make clear the specific hypothesis it interrogates and the rationale for deciding the sensitivity of each statistical test (such as justification of the target effect size or Bayesian prior). When planning the analyses, it is vital to choose the right tools for the job, including assumption checks, detailed consideration of data preprocessing and filtering, and planned contingencies for any data-driven analysis decisions. Where these contingencies would be too numerous (or even impossible) to specify in advance, the inclusion of pilot data at stage 1 can be used to verify assumptions and narrow the range of possibilities. Alternatively, authors can embrace uncertainty and use blinded analysis methods to control risk of bias.
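For a standard frequentist sampling plan, the required sample size follows directly from the target effect size, alpha level and power. The sketch below is an illustrative approximation only, not a procedure mandated by any journal: it uses the normal approximation to a two-sided, two-sample t-test, which slightly underestimates the exact t-based answer.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size_d, alpha=0.05, power=0.90):
    """Approximate per-group n for a two-sided, two-sample comparison:

        n = 2 * ((z_{1-alpha/2} + z_{power}) / d) ** 2

    The normal approximation slightly underestimates the exact t-test n,
    so treat the result as a lower bound.
    """
    z = NormalDist()  # standard normal
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    return ceil(2 * ((z_alpha + z_beta) / effect_size_d) ** 2)

# A medium effect (d = 0.5) at 90% power needs roughly 85 per group;
# halving the detectable effect size roughly quadruples the requirement.
medium = n_per_group(0.5)
small = n_per_group(0.25)
```

Justifying the chosen `effect_size_d` (for example, from prior literature or a smallest effect size of interest) is precisely the rationale that stage 1 reviewers will scrutinize.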

These three features are the essential building blocks for the fourth key ingredient: a seamless link between the research question, theory and its specification, the hypotheses, sampling plan and contingent interpretation given different outcomes (Box 1 ). A stage 1 RR can be thought of as a preparatory chain of inference, leading from the why to the what and how, which, as with any chain, is only as strong as its weakest link. Many RRs now include study design tables to elucidate these links as precisely as possible (for example, see https://osf.io/sbmx9/ ).

Fifth, once the study is completed and results are known, it is essential that the outcomes of the prospective (confirmatory) analyses are clearly distinguished from the outcomes of any post hoc (exploratory) analyses that deviated from the preregistered plans. While RRs are not intended to restrict valid deviation or post hoc exploration, it is vital that at this final hurdle the outcomes that were decided after observing data, and therefore potentially bias-prone, are not conflated with those that were protected from bias by prespecification. Clear differentiation of exploratory and confirmatory outcomes in turn furnishes the sixth key ingredient: ensuring that the conclusions of the RR are based firmly on the evidence presented and appropriately weighted in favour of the confirmatory outcomes. Finally, in line with level 2 of the Transparency and Openness Promotion Guidelines 73 , the seventh key ingredient of a high-quality RR is that study data, code and digital materials are made publicly available to the maximum extent permitted by relevant ethical or legal constraints.

Limitations and drawbacks

Despite their advantages, RRs are neither a panacea nor a one-size-fits-all solution for irreproducibility. As a tool for improving the quality of confirmatory research, they are particularly well suited to hypothesis-driven studies and are not designed to improve the robustness or transparency of purely exploratory science (for which better-suited article types are available 74 , 75 , 76 ). As the format has evolved, various shortcomings have also been revealed in the workflow and implementation.

Lack of protocol transparency

In 2018, Hardwicke et al. 77 reported that of the 70 journals that had adopted RRs permanently, only 50% required that the accepted stage 1 protocols were publicly registered and available alongside the completed stage 2 articles. Protocol transparency is an important element of RRs because it enables readers to check whether authors followed the approved protocol rather than relying on the (typically) closed review process to ensure compliance 78 . One reason for the lack of protocol transparency is that, in 2013, the key progenitor of RRs, Cortex , did not require accepted stage 1 protocols to be made public, and this policy omission was then duplicated among subsequent adopters. However, since the analysis by Hardwicke et al., the recommended author guidelines for RRs at Cortex and at the Center for Open Science have been updated to include protocol transparency 79 . To date, of the 213 permanent adopters with published RR policies, 87% now either publish the stage 1 protocol as a separate article or require stage 1 protocols to be registered and made public no later than the point of stage 2 acceptance (see Supplementary Note for details, and ref. 80 for a dedicated RR registry supported by the Center for Open Science). It remains an ongoing task to persuade all RR-adopting journals to require protocol transparency, and a key aim of future metascience will be to confirm that journals are enforcing their policies appropriately (Box 4 ).

Box 4 The big questions for metascience

How exactly do RRs differ from regular articles, both quantitatively and qualitatively 129 ? Are the method sections longer, more detailed and more easily repeatable? Are the sample sizes larger, as expected by the high statistical power requirements at many adopting journals? Following Soderberg et al. 60 , do blinded expert raters judge RRs to be of higher quality than comparable regular articles? Are RRs more likely to include open data and materials?

Are the results reported in RRs more likely to be replicated when the studies are independently repeated? Are the outcomes of the confirmatory preregistered analyses in RRs more likely to be replicated than the outcomes of post hoc exploratory analyses?

To what extent do stage 2 RRs deviate from their accepted stage 1 protocols? Are such deviations always explicit?

Is the review process for RRs, on average, more or less efficient than the review process for regular articles? Across all journals, is the acceptance rate higher or lower for RRs?

Using a randomized controlled trial, can we conduct a definitive causal test of the hypothesis that the key ingredients of RRs—pre-study review and IPA—reduce publication bias and reporting bias?

Are RRs more likely to include ‘single shot’ studies (versus sequences of studies) compared with regular articles?

Are RRs having beneficial collateral effects on open research policies or practices? For example, is the adoption of RRs at a journal associated with rising standards of reproducibility and transparency for regular articles within the same journal? Are there any detrimental effects?

Are journal editors adhering to their own RR policies? Are stage 1 or stage 2 manuscripts ever rejected for reasons that violate RR policy? Are journals requiring that stage 1 protocols are formally registered and linked to the stage 2 manuscript?

What are the career costs and benefits of pursuing RRs for individual researchers? Does publishing a RR lead to changes in a researcher’s attitude towards their own work or the work of their field? Are there any signs of RRs influencing career opportunities? Once authors have published their first RR, do the regular articles they subsequently publish demonstrate reduced risk of bias compared with the regular articles that they published in the past?

How do the wider author demographics of RRs compare with the normative demographics in their field? Do any discrepancies vary between fields?

How does the tone and content of peer review differ between RRs and regular articles? Does the fact that stage 1 review offers reviewers the opportunity to improve a study design lead to a more constructive and collegial process, as suggested by preliminary survey evidence 130 ? Are the reviews themselves higher quality? This question could be answered by performing qualitative content analysis of the RR reviews versus regular reviews published by journals that employ an open review policy. The comparison would be especially powerful for journals such as Meta-Psychology , which also publishes the reviews of rejected articles.

In the long term, what contribution are RRs making to theory or applications? Can we measure their costs and benefits to health, society and the economy 81 ?

Lack of standardization

Previous analyses by Hardwicke et al. 77 and Scheel et al. 49 show that RRs are registered and reported inconsistently and, in many cases, even lack sufficient information to determine the specific hypotheses. This lack of specificity probably arises from the incompatibility between the seventeenth-century traditional manuscript format—involving discursive and often vague prose documentation—and the demand for precision within RRs. An RR should articulate falsifiable predictions that are linked to specific sampling plans, inferential analyses and contingent interpretations given different outcomes.

Stage 1 delay and bureaucratic tennis

Despite the fact that RRs might, in aggregate, lead to more efficient knowledge generation compared with regular articles (Table 1 ), the fact remains that the stage 1 review time typically adds a period of several months between submitting a stage 1 manuscript and the commencement of the research. This downtime can present a substantial barrier for researchers on short-term contracts or who hold grants that demand immediate data acquisition. Furthermore, in fields that require very specific ethical approval, such as the clinical sciences, authors can find themselves locked into a time-consuming tennis match between the journal and their ethics committee, both of which can insist on approving a precisely specified protocol (Fig. 3 ).

figure 3

One frequently asked question is when authors should obtain ethics committee (EC) or institutional review board (IRB) approval for their stage 1 RR. The answer depends on the tolerance of the EC/IRB for methodological flexibility and the requirements of the journal’s policy. Where the EC/IRB permits flexibility (left track), it is usually most efficient to obtain a generic EC/IRB approval before manuscript submission. Where the EC/IRB will instead only approve a precise protocol, and any deviations to the protocol must be submitted for reapproval, the most efficient course of action depends on the specific journal requirements (right track). Most RRs proceed via the left track, but most RR policies also leave the door open for authors to discuss barriers arising from EC/IRB rigidity. For example, Cortex requires that “all necessary ... approvals (e.g. ethics) are in place for the proposed research. Note that manuscripts will be generally considered only for studies that are able to commence immediately; however, authors with alternative plans are encouraged to contact the journal office for advice .” (emphasis added).

Future

What does the next decade and beyond hold for RRs? The gradual rise of the format has unlocked a range of possibilities for expansion and innovation while also posing challenges for implementation and quality control. Here, we consider some of the major possible developments as RRs scale up. We also reflect on the key outstanding questions for metascience (Box 4 ) and consider how RRs may influence systems for evaluating research and researchers.

Improving efficiency

Perhaps the greatest limitation of the RR format is the time taken for submissions to be reviewed at stage 1 and receive IPA, thus delaying the commencement of research (see the section “ Limitations and drawbacks ”). While it can be argued that elimination of publication bias offsets this cost at a community level, and that the stage 1 delay could improve quality by increasing start-up costs 81 , this downtime nevertheless reduces accessibility of the format to individual researchers and can make it prohibitive for short-term projects. Here, we consider four innovations that could substantially accelerate stage 1 review without reducing quality.

Rapid review

One way to accelerate RR review is to create a network in which reviewers agree to evaluate submissions within a short time frame. In 2020, Royal Society Open Science became the first journal to launch such a network for RRs related to the COVID-19 pandemic 82 . As part of this special initiative, the journal calls for submissions that are relevant to any aspect of COVID-19 in any field, including biological, medical, economic and psychological research, while also seeking specialist reviewers who are willing and able to evaluate stage 1 RRs within 24–48 h of accepting a review request. To date, nearly 900 scientists across a range of disciplines have joined the reviewer network, which is also accessible to other journals. To gain access, the journal must commit to rapid peer review of COVID-19 RRs—striving for 7 days for the initial stage 1 review round—and waive all article processing charges. Since then, 11 additional journals have joined the network, including Nature Human Behaviour , Nature Communications and PLoS Biology . To date, Royal Society Open Science has published six stage 2 RRs arising from the initiative, and additional submission statistics are available in the Supplementary Note 83 , 84 , 85 , 86 , 87 , 88 .

Scheduled review

To date, RRs at all journals are subjected to the same serial review process as regular articles. At stage 1, the manuscript is received and undergoes editorial triage; if it meets minimal requirements, editors then seek and obtain specialist reviews, ideally leading to IPA following revision and, in many cases, re-review (Fig. 4a ). Despite prompt engagement by editors and reviewers, this process can take several months to achieve resolution. An alternative approach is to perform key elements of the initial stage 1 assessment in parallel (Fig. 4b ). Under this model, authors initially submit a short, structured protocol for consideration before writing the stage 1 manuscript. If this passes editorial triage, then reviewers are invited to assess a complete stage 1 manuscript at a fixed date in the future (for example, 6 weeks ahead). During this time, the authors write and submit their complete stage 1 manuscript, which is then reviewed on the scheduled date or during a short range of dates. With sufficient contingencies in place, this modified review process could reduce the initial stage 1 review time (but not re-review time) from weeks/months down to a matter of days.

figure 4

Restructuring the RR submission workflow could considerably reduce the duration of stage 1 peer review (dashed red arrows). a , In the typical RR chronology, the total time taken for editors to triage submissions, acquire reviewers, obtain reviews and reach an editorial decision accumulates serially and only after authors have prepared and submitted a full manuscript. b , Scheduled review could accelerate this process by performing key tasks in parallel. Rather than submitting a full manuscript, authors would initially submit a one-page, template-based RR ‘snapshot’ that undergoes editorial triage. If deemed suitable, the editor would then organize the review process for a fixed future date (or a short range of dates) while the authors prepare the full manuscript. Although this process could only feasibly expedite the first round of stage 1 review (and not the re-review of a revised stage 1 submission), the overall time saving could be substantial since the first round of assessment is usually the most onerous. Although no journals currently offer scheduled review, the workflow has been recently introduced as part of the PCI RR initiative ( https://rr.peercommunityin.org/ ).

Observer–evaluator review and ‘rolling IPA’

A more radical alternative to accelerated review and scheduled review would be to abolish the current peer review system altogether, replacing the serial assessment of documents—arguably a throwback to the seventeenth-century exchange of letters—with a more dynamic observer–evaluator mechanism. Authors could use an existing infrastructure such as the Open Science Framework (and associated add-ons) to create a virtual laboratory space containing the project rationale, study protocol(s), code and data as applicable. Reviewers could then be parachuted in as virtual observers, monitoring, commenting and approving specific components as the research unfolds in real-time, with the editor providing guidance, monitoring and oversight. This system could make RRs more compatible with the rapid sequential workflow that is common in fields such as chemical biology, virology and psychophysics, where the results of one experiment often lead within days to the design and implementation of the next experiment. As each update to the protocol is approved, IPA could be rolled over and extended.

RR funding models

One of the major barriers to RRs in clinical research is the additional bureaucracy imposed by stage 1 review. Many researchers already face multiple pre-study hurdles, including grant review, ethics review and, in some cases, regulatory review, and all before even contemplating a stage 1 RR submission. One promising solution to this problem is for journals and funders to perform concurrent or near concurrent reviews of RR proposals. Under this partnership model, which was first trialled in 2017 (refs. 89 , 90 ), authors submit a stage 1 proposal to a journal and funder, either simultaneously or in succession. Following assessment by both parties (either separately or as part of a joint process), if expert reviews are favourable, then IPA and funding are awarded in synchrony, reducing two pre-study review phases into one. This mechanism could be further enhanced by incorporating ethics and regulatory review, which would be particularly useful for clinical trials for which these phases of review often require the assessment of a detailed and precise protocol.

Improving quality, accountability and rewards

Alongside improvements in efficiency, the next decade is likely to see a range of measures to optimize the openness and reliability of both RRs and the assessment of RRs. In this final section, we consider six key innovations, including computationally generated RRs, mandatory RRs for clinical trials, the development of a RR training and accreditation process for journal editors, tools for monitoring the speed and quality of journal assessment, ways to improve the recognition of reviewer contributions and emerging steps to incorporate RRs into formal research evaluation.

Computationally generated RRs

Quality control of accepted protocols is limited by the subjectivity of the review process and variable implementation of RR criteria across journals and fields. As previously noted (see the section “Lack of standardization”), the specificity of hypotheses, statistical tests and interpretations may not always be achievable with the current format. It may therefore be unrealistic to expect authors to achieve the required level of precision without tools that can guide them through this process, such as RR study design templates or software (for example, the ScienceVerse project developed by L. DeBruine and D. Lakens, available at https://scienceverse.github.io/scienceverse/ ). For RRs that test specific confirmatory hypotheses, all preregistered hypotheses and statistical predictions could also become machine-readable 91 . A machine-readable output can be used to extract key metadata from the RR, as the results either confirm or disconfirm the prespecified hypotheses. Evaluating a stage 2 manuscript on the basis of adherence to preregistered statistical predictions would then become a more efficient and standardized process for reviewers and editors.
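To illustrate the idea of machine-readable predictions, the sketch below encodes a preregistered hypothesis as structured data and checks the observed result against it. The field names and structure are illustrative assumptions for this Review, not the actual ScienceVerse format or any journal's schema.

```python
# Hypothetical sketch: a machine-readable preregistered hypothesis.
# All names (fields, test labels) are illustrative assumptions.
hypothesis = {
    "id": "H1",
    "description": "Group A scores higher than group B",
    "test": "welch_t",
    "prediction": {"direction": "greater", "alpha": 0.05},
}

def evaluate(hypothesis, p_value, direction):
    """Compare observed results against the prespecified prediction and
    return 'confirmed' or 'disconfirmed'."""
    pred = hypothesis["prediction"]
    confirmed = (p_value < pred["alpha"]) and (direction == pred["direction"])
    return "confirmed" if confirmed else "disconfirmed"

print(evaluate(hypothesis, p_value=0.012, direction="greater"))  # confirmed
print(evaluate(hypothesis, p_value=0.210, direction="greater"))  # disconfirmed
```

Because the prediction is stored as data rather than prose, a stage 2 check for adherence to the preregistered plan could, in principle, be automated across an entire journal's RR portfolio.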

Several preregistration templates are available to assist researchers in communicating their study plans in a concise and structured format 92 , and some journals also implement their own protocol-coding checklists with strict criteria for IPA 93 (for example, https://osf.io/6bv27/ ). These templates and checklists can be time-consuming to complete on top of manuscript preparation; therefore, an essential innovation for RRs will be the creation of a user-friendly web-based RR generation tool that guides authors through the implementation of stringent criteria, including precise specification of hypotheses linked with sampling plans, analysis plans, inference criteria and contingent interpretations given different outcomes. The tool would ideally accommodate a wide range of disciplines, similar to the experimental design assistant offered by NC3Rs ( https://www.nc3rs.org.uk/experimental-design-assistant-eda ), thereby producing a standardized, submission-ready stage 1 protocol.

Mandatory RRs for clinical trials

In basic (non-clinical) research, RRs are usually proposed as an additional option for authors rather than a requirement, which makes sense given the vital importance of exploratory science. However, we believe a strong case can be made for all clinical trials to be conducted and reported exclusively as RRs. Even though trial registration is now the norm, registration does not guarantee that trials are preregistered rather than ‘post-registered’ 94 , 95 , that trial results will be reported free from bias 96 , 97 or that the results will be published at all 98 , 99 . With trials being vulnerable to all the same publication and reporting biases that afflict basic research, and with the first RR model for clinical trials now available at BMC Medicine 100 , the next decade will hopefully see mounting pressure on clinical trial funders and major medical journals to embrace the format, ideally via RR funding models to maximize efficiency.

Training and accreditation for editors

As a general rule, the standard with which a journal reviews and administers RRs can never exceed the standard of its RR editing, so it is crucial that editors have the required skills and training. Guidelines to assist editors in evaluating stage 1 RR submissions are available from most RR-adopting journals or at the Center for Open Science ( https://cos.io/rr/ ), and the PCI RR initiative requires new editors (called “recommenders”) to pass a 2 h entrance test. The expansion of RRs into new disciplines and less familiar terrain is bound to introduce variability in the standard of editing. An important future step for increasing and standardizing the quality of RR editing will therefore be to provide editors with training materials tailored to the busy schedules of academics, possibly delivered as a massive open online course. In this way, editors, and perhaps entire journals, could receive accreditation for their knowledge and understanding of the RR process, including criteria for manuscript acceptance/rejection at stage 1 (IPA) and stage 2. This system would equip authors with data to inform their choice of journal, and readers with confidence in the standard of RR editing across journals.

Community monitoring and feedback

Related to the issue of editorial training is the issue of journal accountability. At present, there is little information available for authors to judge the quality of editing and review at a RR-adopting journal, apart from word of mouth and conventional (probably uninformative) indicators such as journal prestige 55 . We believe journals should regularly publish all data on the number of RR submissions received, rejection rates at the different stages and time spent under review. In addition, open review policies and a Yelp-style website in which authors and reviewers could leave anonymous feedback ratings on the quality of the editorial process would provide an incentive for journals to maintain high standards.
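The journal-level statistics proposed above (submission counts, rejection rates, time under review) could be computed directly from a journal's submission log. The sketch below shows one minimal way to do this; the log format and field names are illustrative assumptions, not any publisher's actual schema.

```python
# Hypothetical sketch: computing RR transparency metrics from a toy
# submission log. Field names are illustrative assumptions.
from datetime import date
from statistics import median

submissions = [
    {"stage": 1, "submitted": date(2021, 1, 10), "decided": date(2021, 3, 1), "outcome": "rejected"},
    {"stage": 1, "submitted": date(2021, 2, 5), "decided": date(2021, 4, 20), "outcome": "IPA"},
    {"stage": 2, "submitted": date(2021, 9, 1), "decided": date(2021, 10, 15), "outcome": "accepted"},
]

def journal_report(log, stage):
    """Summarize one review stage: submission count, rejection rate and
    median days under review."""
    records = [s for s in log if s["stage"] == stage]
    days = [(s["decided"] - s["submitted"]).days for s in records]
    rejected = sum(s["outcome"] == "rejected" for s in records)
    return {
        "n_submissions": len(records),
        "rejection_rate": rejected / len(records),
        "median_days_under_review": median(days),
    }

print(journal_report(submissions, stage=1))
```

Publishing such a report annually would cost journals little and would give authors an evidence base, beyond prestige, for choosing where to submit.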

Reviewer recognition

Reviewers often make major contributions to RRs that are not transparently recognized. During stage 1, reviewers can recommend major changes in the study design, hypotheses, methods and analyses, contributions that would readily justify authorship if made outside the review process. One way to ensure that reviewers are properly credited for their contribution to RRs is for a journal to adopt an open, signed review policy. Although most journals employ closed, anonymous peer review, a minority of RR-adopting journals, such as Royal Society Open Science and Meta-Psychology , publish the accepted article alongside the reviews, which reviewers can sign to increase transparency, credit and accountability. Reviewers can further list their contributions on platforms such as Publons ( https://publons.com/ ). We believe it is important for reviewers, and especially ECR reviewers, to have publicly available evidence demonstrating the quality of their reviewing. To recognize this contribution further, one possibility would be to create a ‘reviewer contributor’ role that formally acknowledges the intellectual input of reviewers to the final RR, without being named as an author. This role could also be recognized through use of the CRediT taxonomy ( https://casrai.org/credit/ ).

Research evaluation

To become normative in the long term, RRs will need to be recognized within formal systems for evaluating research quality. In the United Kingdom, there are already promising moves in this direction. In the 2021 Research Excellence Framework—a regular national exercise for assessing research quality and apportioning public funds—RRs are specifically noted as an indicator of research rigour 101 , which in turn means that authors who publish RRs could attract increased funding for their institutions. Following the recent formation of the UK Reproducibility Network ( http://www.ukrn.org ), institutions are also signalling their support for RRs 102 . University College London “strongly encourages” researchers to use RRs where appropriate 103 , an approach echoed by learned societies including the British Neuroscience Association 104 and the British Psychological Society 105 . The Norwegian funder Stiftelsen Dam also recommends that grantees consider publishing their research in the form of RRs 106 , while the Templeton World Charity Foundation goes so far as to mandate RRs for certain funding schemes 107 . The next 5 years will hopefully see international expansion in the recognition of RRs at all stages of evaluation, from research outputs and grant applications to the criteria for employment and promotions. It is crucial that such judgments are applied cautiously with continual reference to ongoing metascience that will establish evidence of the costs and benefits of the format (Box 4 ).

In this Review, we reflected on the history, preliminary impacts and future potential of the RR initiative. For the past 8 years, the life and social sciences have embarked on a journey into the unknown—one that has been mooted for decades but has only now reached open waters. Early suggestions of impact are promising, with RRs more likely to disconfirm a priori hypotheses and to be computationally reproducible, while also receiving higher quality ratings and the same or higher attention through citations. The prospects of the initiative now hinge on more detailed metascience, while addressing limitations and maintaining quality control as the format scales up and into new disciplines. As we look into the next decade, we believe RRs are showing all the signs of becoming a powerful antidote to reporting and publication bias, realigning incentives to ensure that the practices that are best for science—transparent, reproducible, accurate reporting—also serve the interests of individual scientists.

Vazire, S. Implications of the credibility revolution for productivity, creativity, and progress. Perspect. Psychol. Sci. https://doi.org/10.1177/1745691617751884 (2018).

Munafò, M. R. et al. A manifesto for reproducible science. Nat. Hum. Behav. 1 , 0021 (2017).

Reproducibility and Reliability of Biomedical Research: Improving Research Practice (The Academy of Medical Sciences, 2015); https://acmedsci.ac.uk/file-download/38189-56531416e2949.pdf

Fanelli, D. “Positive” results increase down the hierarchy of the sciences. PLoS ONE 5 , e10068 (2010).

Franco, A., Malhotra, N. & Simonovits, G. Publication bias in the social sciences: unlocking the file drawer. Science 345 , 1502–1505 (2014).

Harrison, J. S., Banks, G. C., Pollack, J. M., O’Boyle, E. H. & Short, J. Publication bias in strategic management research. J. Manag . https://doi.org/10.1177/0149206314535438 (2014).

Jennions, M. D. & Møller, A. P. Publication bias in ecology and evolution: an empirical assessment using the ‘trim and fill’ method. Biol. Rev. 77 , 211–222 (2002).

Kerr, N. L. HARKing: hypothesizing after the results are known. Personal. Soc. Psychol. Rev. 2 , 196–217 (1998).

Bruns, S. B. & Ioannidis, J. P. A. p -Curve and p -Hacking in observational research. PLoS ONE 11 , e0149144 (2016).

Khan, M. J. & Trønnes, P. C. p -Hacking in experimental audit research. Behav. Res. Account. 31 , 119–131 (2018).

Holman, L., Head, M. L., Lanfear, R. & Jennions, M. D. Evidence of experimental bias in the life sciences: why we need blind data recording. PLoS Biol. 13 , e1002190 (2015).

Fiedler, K. & Schwarz, N. Questionable research practices revisited. Soc. Psychol. Personal. Sci. 7 , 45–52 (2016).

Rabelo, A. L. A. et al. Questionable research practices among Brazilian psychological researchers: results from a replication study and an international comparison. Int. J. Psychol . https://doi.org/10.1002/ijop.12632 (2019).

Fraser, H., Parker, T., Nakagawa, S., Barnett, A. & Fidler, F. Questionable research practices in ecology and evolution. PLoS ONE 13 , e0200303 (2018).

John, L. K., Loewenstein, G. & Prelec, D. Measuring the prevalence of questionable research practices with incentives for truth telling. Psychol. Sci. 23 , 524–532 (2012).

Button, K. S. et al. Power failure: why small sample size undermines the reliability of neuroscience. Nat. Rev. Neurosci. 14 , 365–376 (2013).

Wicherts, J. M., Borsboom, D., Kats, J. & Molenaar, D. The poor availability of psychological research data for reanalysis. Am. Psychol. 61 , 726–728 (2006).

Mueller-Langer, F., Fecher, B., Harhoff, D. & Wagner, G. G. Replication studies in economics—how many and which papers are chosen for replication, and why? Res. Policy 48 , 62–83 (2019).

Makel, M. C., Plucker, J. A. & Hegarty, B. Replications in psychology research: how often do they really occur? Perspect. Psychol. Sci. 7 , 537–542 (2012).

Camerer, C. F. et al. Evaluating replicability of laboratory experiments in economics. Science 351 , 1433–1436 (2016).

Davis, R. J. et al. Reproducibility project: cancer biology. eLife https://elifesciences.org/collections/9b1e83d1/reproducibility-project-cancer-biology (2014).

Open Science Collaboration. Estimating the reproducibility of psychological science. Science 349 , aac4716 (2015).

Nosek, B. A., Spies, J. R. & Motyl, M. Scientific utopia: II. Restructuring incentives and practices to promote truth over publishability. Perspect. Psychol. Sci. 7 , 615–631 (2012).

Feynman, R. P. Cargo cult science. Eng. Sci. 37 , 10–13 (1974).

Johnson, J. A. Are research psychologists more like detectives or lawyers? Psychol. Today http://www.psychologytoday.com/blog/cui-bono/201307/are-research-psychologists-more-detectives-or-lawyers-0 (2013).

Bem, D. J. In The Compleat Academic: A Practical Guide for the Beginning Social Scientist (eds Zanna, M. P. & Darley, J. M.) Ch. 8 (Lawerence Erlbaum, 1987).

Bem, D. J. In The Compleat Academic: A Career Guide 2nd edn (eds Darley, J. M. et al.) Ch. 10 (American Psychological Association, 2003).

Fiske, S. T. In The Sage Handbook of Methods in Social Psychology (eds Sansone, C. et al.) Ch. 4 (SAGE, 2003).

Sanes, J. R. Tell me a story. eLife 8 , e50527 (2019).

Bakker, M., van Dijk, A. & Wicherts, J. M. The rules of the game called psychological science. Perspect. Psychol. Sci. 7 , 543–554 (2012).

Grand, J. A., Rogelberg, S. G., Banks, G. C., Landis, R. S. & Tonidandel, S. From outcome to process focus: fostering a more robust psychological science through Registered Reports and results—blind reviewing. Perspect. Psychol. Sci. 13 , 448–456 (2018).

Eich, E. PSCI Initiatives for 2013. https://groups.google.com/group/openscienceframework/attach/8e518ad385b642e5/PSCI%20Initiatives%20for%202013%20%2820121008%29.docx?part=0.1 (2012).

Chambers, C. Changing the culture of scientific publishing from within. NeuroChambers (8 October 2012); https://neurochambers.blogspot.com/2012/10/changing-culture-of-scientific.html

Simons, D. J. Registered Replication Reports—Stay Tuned! Daniel Simons Blog (13 May 2013); http://blog.dansimons.com/2013/05/registered-replication-reports-stay.html

Nosek, B. A. & Lakens, D. Call for proposals: special issue of social psychology on “replications of important results in social psychology”. Soc. Psychol. 44 , 59–60 (2013).

Chambers, C. D. Registered Reports: a new publishing initiative at Cortex . Cortex 49 , 609–610 (2013).

Peirce, C. S. Illustrations of the logic of science VI: deduction, induction, and hypothesis. Pop. Sci. Monthly 13 , 470–482 (1878).

Wagenmakers, E.-J., Dutilh, G. & Sarafoglou, A. The creativity–verification cycle in psychological science: new methods to combat old idols. Perspect. Psychol. Sci. 13, 418–427 (2018).

Rosenthal, R. Experimenter Effects in Behavioral Research (Appleton-Century-Crofts, 1966).

Weiss, D. J. An experiment in publication: advance publication review. Appl. Psychol. Meas. 13 , 1–7 (1989).

Kupfersmid, J. Improving what is published: a model in search of an editor. Am. Psychol. 43 , 635–642 (1988).

Newcombe, R. G. Towards a reduction in publication bias. Br. Med. J. Clin. Res. Ed. 295, 656–659 (1987).

Mahoney, M. J. Publication prejudices: an experimental study of confirmatory bias in the peer review system. Cogn. Ther. Res. 1 , 161–175 (1977).

Walster, G. W. & Cleary, T. A. A proposal for a new editorial policy in the social sciences. Am. Stat. 24 , 16–19 (1970).

Wiseman, R., Watt, C. & Kornbrot, D. Registered Reports: an early example and analysis. PeerJ 7 , e6232 (2019).

The Editors of the Lancet. Protocol review at The Lancet : 1997–2015. Lancet 386 , 2456–2457 (2015).

Maizey, L. & Tzavella, L. Barriers and solutions for early career researchers in tackling the reproducibility crisis in cognitive neuroscience. Cortex 113 , 357–359 (2019).

Allen, C. & Mehler, D. M. A. Open science challenges, benefits and tips in early career and beyond. PLoS Biol. 17 , e3000246 (2019).

Scheel, A. M., Schijen, M. & Lakens, D. An excess of positive results: comparing the standard psychology literature with Registered Reports. Adv. Meth. Pract. Psychol. Sci. 4, 1–12 (2021).

Wicherts, J. M., Bakker, M. & Molenaar, D. Willingness to share research data is related to the strength of the evidence and the quality of reporting of statistical results. PLoS ONE 6 , e26828 (2011).

Obels, P., Lakens, D., Coles, N. A., Gottfried, J. & Green, S. A. Analysis of open data and computational reproducibility in Registered Reports in psychology. Preprint at PsyArXiv https://doi.org/10.31234/osf.io/fk8vh (2019).

Hardwicke, T. E. et al. Data availability, reusability, and analytic reproducibility: evaluating the impact of a mandatory open data policy at the journal Cognition. R. Soc. Open Sci. 5 , 180448 (2018).

Jannot, A.-S., Agoritsas, T., Gayet-Ageron, A. & Perneger, T. V. Citation bias favoring statistically significant studies was present in medical research. J. Clin. Epidemiol. 66 , 296–301 (2013).

Misemer, B. S., Platts-Mills, T. F. & Jones, C. W. Citation bias favoring positive clinical trials of thrombolytics for acute ischemic stroke: a cross-sectional analysis. Trials 17 , 473 (2016).

Brembs, B., Button, K. & Munafò, M. Deep impact: unintended consequences of journal rank. Front. Hum. Neurosci. 7 , 291 (2013).

Fang, F. C., Steen, R. G. & Casadevall, A. Misconduct accounts for the majority of retracted scientific publications. Proc. Natl Acad. Sci. USA 109 , 17028–17033 (2012).

Lozano, G. A., Larivière, V. & Gingras, Y. The weakening relationship between the impact factor and papers’ citations in the digital age. J. Am. Soc. Inf. Sci. Technol. 63 , 2140–2145 (2012).

Seglen, P. O. Why the impact factor of journals should not be used for evaluating research. BMJ 314 , 498–502 (1997).

Hummer, L., Thorn, F. S., Nosek, B. A. & Errington, T. Evaluating Registered Reports: a naturalistic comparative study of article impact. Preprint at OSF https://doi.org/10.31219/osf.io/5y8w7 (2017).

Soderberg, C. K. et al. Initial evidence of research quality of Registered Reports compared with the standard publishing model. Nat. Hum. Behav . https://doi.org/10.1038/s41562-021-01142-4 (2021).

Button, K. S., Bal, L., Clark, A. & Shipley, T. Preventing the ends from justifying the means: withholding results to address publication bias in peer-review. BMC Psychol. 4 , 59 (2016).

Srivastava, S. A Pottery Barn rule for scientific journals. The Hardest Science (27 September 2012); https://thehardestscience.com/2012/09/27/a-pottery-barn-rule-for-scientific-journals/

Lilienfeld, S. O. Clinical psychological science: then and now. Clin. Psychol. Sci. 5 , 3–13 (2017).

Lucas, R. E. & Donnellan, M. B. Enhancing transparency and openness at the Journal of Research in Personality . J. Res. Personal. 68 , 1–4 (2017).

Anonymous. Preregistered direct replications: a new article type in psychological science. APS Obs . https://www.psychologicalscience.org/observer/preregistered-direct-replications-a-new-article-type-in-psychological-science (2017).

Replication Studies (Royal Society Open Science, 2021); https://royalsocietypublishing.org/rsos/replication-studies

Reproducibility and Transparency Collection (The Royal Society, 2021); https://royalsocietypublishing.org/topic/special-collections/rsos-reproducibility

Murray, H. Transparency meets transparency. F1000 Blogs (12 October 2017); https://blog.f1000.com/2017/10/12/transparency-meets-transparency/

Carlsson, R. et al. Inaugural editorial of Meta-Psychology . Meta-Psychol. 1 , a1001 (2017).

Kiyonaga, A. & Scimeca, J. M. Practical considerations for navigating Registered Reports. Trends Neurosci. 42 , 568–572 (2019).

Guest, O. & Martin, A. E. How computational modeling can force theory building in psychological science. Perspect. Psychol. Sci . https://doi.org/10.1177/1745691620970585 (2021).

Schönbrodt, F. D. & Wagenmakers, E.-J. Bayes factor design analysis: planning for compelling evidence. Psychon. Bull. Rev. 25 , 128–142 (2018).

Nosek, B. A. et al. Promoting an open research culture. Science 348 , 1422–1425 (2015).

For Authors (BMJ Open Science, 2021); https://openscience.bmj.com/pages/authors/

Exploratory Reports at IRSP: Guidelines for Authors (International Review of Social Psychology, 2021); http://www.rips-irsp.com/about/exploratory-reports/

McIntosh, R. D. Exploratory reports: a new article type for Cortex . Cortex 96 , A1–A4 (2017).

Hardwicke, T. E. & Ioannidis, J. P. A. Mapping the universe of Registered Reports. Nat. Hum. Behav. 2 , 793–796 (2018).

Chambers, C. D. & Mellor, D. T. Protocol transparency is vital for Registered Reports. Nat. Hum. Behav. 2 , 791–792 (2018).

Center for Open Science: Template Reviewer and Author Guidelines (Open Science Framework, 2018); https://osf.io/8mpji/

OSF Registries (OSF, 2021); https://osf.io/registries/discover?provider=OSF&type=Registered%20Report%20Protocol%20Preregistration

Tiokhin, L., Morgan, T. & Yan, M. Competition for priority and the cultural evolution of research strategies. Preprint at MetaArXiv https://doi.org/10.31222/osf.io/x4t7q (2020).

Chambers, C. Calling all scientists: rapid evaluation of COVID19-related Registered Reports at Royal Society Open Science. NeuroChambers (16 March 2020); http://neurochambers.blogspot.com/2020/03/calling-all-scientists-rapid-evaluation.html

Zhou, T., Nguyen, T. T., Zhong, J. & Liu, J. A COVID-19 descriptive study of life after lockdown in Wuhan, China. R. Soc. Open Sci. 7 , 200705 (2020).

Weinstein, N. & Nguyen, T.-V. Motivation and preference in isolation: a test of their different influences on responses to self-isolation during the COVID-19 outbreak. R. Soc. Open Sci. 7 , 200458 (2020).

Khan, K. A. & Cheung, P. Presence of mismatches between diagnostic PCR assays and coronavirus SARS-CoV-2 genome. R. Soc. Open Sci. 7 , 200636 (2020).

Riello, M., Purgato, M., Bove, C., MacTaggart, D. & Rusconi, E. Prevalence of post-traumatic symptomatology and anxiety among residential nursing and care home workers following the first COVID-19 outbreak in Northern Italy. R. Soc. Open Sci. 7 , 200880 (2020).

Lieberoth, A. et al. Stress and worry in the 2020 coronavirus pandemic: relationships to trust and compliance with preventive measures across 48 countries in the COVIDiSTRESS global survey. R. Soc. Open Sci. 8 , 200589 (2021).

Yonemitsu, F. et al. Warning ‘don’t spread’ versus ‘don’t be a spreader’ to prevent the COVID-19 pandemic. R. Soc. Open Sci. 7 , 200793 (2020).

PLoS ONE Editors. PLoS ONE partners with the Children’s Tumor Foundation to trial Registered Reports. EveryONE: The PLoS ONE blog (26 September 2017); https://blogs.plos.org/everyone/2017/09/26/registered-reports-with-ctf/

Munafò, M. R. Improving the efficiency of grant and journal peer review: Registered Reports funding. Nicotine Tob. Res. 19 , 773–773 (2017).

Lakens, D. & DeBruine, L. Improving transparency, falsifiability, and rigour by making hypothesis tests machine readable. Preprint at PsyArXiv https://doi.org/10.31234/osf.io/5xcda (2020).

Mellor, D. & DeHaven, A. Templates of OSF Registration Forms (OSF, 2016); https://osf.io/zab38/

Wicherts, J. M. et al. Degrees of freedom in planning, running, analyzing, and reporting psychological studies: a checklist to avoid p -hacking. Front. Psychol. 7 , 1832 (2016).

Mathieu, S., Boutron, I., Moher, D., Altman, D. G. & Ravaud, P. Comparison of registered and published primary outcomes in randomized controlled trials. JAMA 302 , 977–984 (2009).

Gopal, A. D. et al. Adherence to the International Committee of Medical Journal Editors’ (ICMJE) prospective registration policy and implications for outcome integrity: a cross-sectional analysis of trials published in high-impact specialty society journals. Trials 19 , 448 (2018).

Goldacre, B. et al. Tracking switched outcomes in clinical trials. COMPare http://compare-trials.org (2016).

Ramagopalan, S. V. et al. Funding source and primary outcome changes in clinical trials registered on ClinicalTrials.gov are associated with the reporting of a statistically significant primary outcome: a cross-sectional study. F1000Research 4 , 80 (2015).

Goldacre, B. et al. Compliance with requirement to report results on the EU Clinical Trials Register: cohort study and web resource. BMJ 362 , k3218 (2018).

Chen, R. et al. Publication and reporting of clinical trial results: cross sectional analysis across academic medical centers. BMJ 352 , i637 (2016).

The BMC Medicine Team. BMC Medicine becomes the first medical journal to accept Registered Reports. Research in Progress Blog (24 August 2017); http://blogs.biomedcentral.com/bmcblog/2017/08/24/bmc-medicine-becomes-the-first-medical-journal-to-accept-registered-reports/

Panel Criteria and Working Methods (Research Excellence Framework, 2019); https://www.ref.ac.uk/media/1084/ref-2019_02-panel-criteria-and-working-methods.pdf

Munafò, M. Raising research quality will require collective action. Nature 576 , 183–183 (2019).

UCL Statement on Transparency in Research (University College London, 2019); https://www.ucl.ac.uk/research/sites/research/files/ucl_statement_on_transparency_in_research_november_20191.pdf

Rousselet, G. A., Hazell, G., Cooke, A. & Dalley, J. W. Promoting and supporting credibility in neuroscience. Brain Neurosci. Adv. 3 , 2398212819844167 (2019).

British Psychological Society. We’re offering Registered Reports across all eleven of our academic journals. BPS News (13 July 2018); https://www.bps.org.uk/news-and-policy/were-offering-registered-reports-across-all-eleven-our-academic-journals

Stiftelsen Dam. Krav om forhåndregistrering av studier finansiert av Stiftelsen Dam. Stiftelsen Dam (9 November 2018); https://dam.no/krav-om-forhandsregistrering-av-studier-finansiert-av-stiftelsen-dam/

Accelerating Research on Consciousness (Templeton World Charity Foundation, 2021); https://www.templetonworldcharity.org/our-priorities/accelerating-research-consciousness

Heycke, T., Aust, F. & Stahl, C. Subliminal influence on preferences? A test of evaluative conditioning for brief visual conditioned stimuli using auditory unconditioned stimuli. R. Soc. Open Sci. 4 , 160935 (2017).

Ait Ouares, K., Beurrier, C., Canepari, M., Laverne, G. & Kuczewski, N. Opto nongenetics inhibition of neuronal firing. Eur. J. Neurosci. 49 , 6–26 (2019).

Sassenhagen, J. & Bornkessel-Schlesewsky, I. The P600 as a correlate of ventral attention network reorientation. Cortex 66 , A3–A20 (2015).

Allinson, M. Royal Society Open Science launches Registered Reports. The Royal Society Blog (27 November 2015); https://web.archive.org/web/20160702062134/https://blogs.royalsociety.org/publishing/registered-reports/

Nosek, B. A. & Errington, T. M. Reproducibility in cancer biology: making sense of replications. eLife 6 , e23383 (2017).

Guo, W., Del Vecchio, M. & Pogrebna, G. Global network centrality of university rankings. R. Soc. Open Sci. 4 , 171172 (2017).

[No authors listed]. Promoting reproducibility with Registered Reports. Nat. Hum. Behav . https://doi.org/10.1038/s41562-016-0034 (2017).

MacCoun, R. & Perlmutter, S. Blind analysis: hide results to seek the truth. Nature 526 , 187–189 (2015).

Dutilh, G. et al. A test of the diffusion model explanation for the worst performance rule using preregistration and blinding. Atten. Percept. Psychophys. 79 , 713–725 (2017).

Dienes, Z. Using Bayes to get the most out of non-significant results. Front. Psychol. 5 , 781 (2014).

Lakens, D. Equivalence tests: a practical primer for t tests, correlations, and meta-analyses. Soc. Psychol. Personal. Sci. 8 , 355–362 (2017).

Preregistered Research Article Guidelines for Authors ( PLoS Biology , 2020); https://plos-marketing.s3.amazonaws.com/Marketing/Biology+Preregistered+Articles+Guidelines+for+Authors.pdf

Registered Reports: Author and Reviewer Guidelines ( Nature Human Behaviour , 2021); https://media.nature.com/original/nature-cms/uploads/ckeditor/attachments/4127/RegisteredReportsGuidelines_NatureHumanBehaviour.pdf

Guidelines for Reviewers ( Cortex , 2013); https://cdn.elsevier.com/promis_misc/PROMIS%20pub_idt_CORTEX%20Guidelines_RR_29_04_2013.pdf

Royal Society Open Science. Registered Reports (The Royal Society, 2021); https://royalsocietypublishing.org/rsos/registered-reports#ReviewerGuideRegRep

Registered Reports: Resources for Editors (Center for Open Science, 2021); https://cos.io/rr/

Petticrew, M. et al. Publication bias in qualitative research: what becomes of qualitative research presented at conferences? J. Epidemiol. Community Health 62 , 552–554 (2008).

Piñeiro, R. & Rosenblatt, F. Pre-analysis plans for qualitative research. Rev. Cienc. Política 36 , 785–796 (2016).

Kern, F. G. & Gleditsch, K. S. Exploring pre-registration and pre-analysis plans for qualitative inference. Preprint at ResearchGate https://doi.org/10.13140/RG.2.2.14428.69769 (2017).

Haven, T. L. & Grootel, D. L. V. Preregistering qualitative research. Account. Res. 26 , 229–244 (2019).

Hartman, A., Kern, F. & Mellor, D. Preregistration for Qualitative Research Template (OSF, 2018); https://osf.io/j7ghv/

Mehlenbacher, A. R. Registered Reports: genre evolution and the research article. Writ. Commun. 36 , 38–67 (2019).

DeHaven, A. C. et al. Registered Reports: views from editors, reviewers and authors. Preprint at MetaArXiv https://doi.org/10.31222/osf.io/ndvek (2019).

Acknowledgements

We are grateful to D. Mellor, B. Nosek and the Center for Open Science for their ongoing collaboration and discussion, to A. O’Mahony for the collation of key statistics concerning published RRs, and to the many authors, reviewers and editors who have supported the RR initiative.

Author information

Authors and Affiliations

Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Cardiff, UK

Christopher D. Chambers & Loukia Tzavella

Corresponding author

Correspondence to Christopher D. Chambers.

Ethics declarations

Competing interests

C.D.C. is a member of the Advisory Board of Nature Human Behaviour, is chair of the RRs committee supported by the Center for Open Science, is a co-founder of Peer Community in Registered Reports, and currently serves as RR editor at BMJ Open Science, Cortex, European Journal of Neuroscience, NeuroImage, Neuroimage: Reports, PLoS Biology and Royal Society Open Science.

Additional information

Peer review information Nature Human Behaviour thanks Bert Bakker and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary Figs. 1–3, Supplementary Table 1 and Supplementary Note.

About this article

Cite this article.

Chambers, C.D., Tzavella, L. The past, present and future of Registered Reports. Nat Hum Behav 6, 29–42 (2022). https://doi.org/10.1038/s41562-021-01193-7

Received: 10 February 2020

Accepted: 05 August 2021

Published: 15 November 2021

Issue Date: January 2022

DOI: https://doi.org/10.1038/s41562-021-01193-7
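The DOI above follows the standard "10.&lt;registrant&gt;/&lt;suffix&gt;" form and can be resolved by appending it to the doi.org proxy, which redirects to the publisher's landing page. A minimal sketch of both operations (the helper names `doi_url` and `split_doi` are illustrative, not part of any citation library):

```python
def doi_url(doi: str) -> str:
    """Return the resolver URL for a DOI.

    Any DOI can be resolved by appending it to https://doi.org/,
    which redirects to the current landing page for the work.
    """
    return "https://doi.org/" + doi.strip()


def split_doi(doi: str) -> tuple[str, str]:
    """Split a DOI into its registrant prefix and suffix.

    DOIs take the form '10.<registrant>/<suffix>'; everything
    before the first slash is the prefix.
    """
    prefix, _, suffix = doi.strip().partition("/")
    return prefix, suffix


print(doi_url("10.1038/s41562-021-01193-7"))
# → https://doi.org/10.1038/s41562-021-01193-7
print(split_doi("10.1038/s41562-021-01193-7"))
# → ('10.1038', 's41562-021-01193-7')
```

Because the resolver URL is stable even when a publisher reorganizes its site, citing the `doi.org` form (as the citation above does) is preferred over citing the landing-page URL directly.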

This article is cited by

  • Current best practices and future opportunities for reproducible findings using large-scale neuroimaging in psychiatry. Neda Jahanshad, Petra Lenzini, Janine Bijsterbosch. Neuropsychopharmacology (2024)

  • Association between resting-state connectivity patterns in the defensive system network and treatment response in spider phobia—a replication approach. Elisabeth J. Leehr, Fabian R. Seeger, Ulrike Lueken. Translational Psychiatry (2024)

  • Registered Report for climate research. Nature Climate Change (2024)

  • Integrating the Philosophy and Psychology of Well-Being: An Opinionated Overview. James L. D. Brown, Sophie Potter. Journal of Happiness Studies (2024)

  • Seeing more than the Tip of the Iceberg: Approaches to Subthreshold Effects in Functional Magnetic Resonance Imaging of the Brain. Benedikt Sundermann, Bettina Pfleiderer, Christian Mathys. Clinical Neuroradiology (2024)

Finding Scholarly Articles: Home

What's a Scholarly Article?

Your professor has specified that you are to use scholarly (or primary research or peer-reviewed or refereed or academic) articles only in your paper. What does that mean?

Scholarly or primary research articles are peer-reviewed , which means that they have gone through the process of being read by reviewers or referees  before being accepted for publication. When a scholar submits an article to a scholarly journal, the manuscript is sent to experts in that field to read and decide if the research is valid and the article should be published. Typically the reviewers indicate to the journal editors whether they think the article should be accepted, sent back for revisions, or rejected.

To decide whether an article is a primary research article, look for the following:

  • The authors' credentials and academic affiliations should be given;
  • There should be an abstract summarizing the research;
  • The methods and materials used should be described, often in a separate section;
  • There should be citations within the text or footnotes referencing the sources used;
  • The results of the research should be given;
  • There should be a discussion and a conclusion;
  • There should be a bibliography or list of references at the end.

Caution: even though a journal may be peer-reviewed, not all the items in it will be. For instance, there might be editorials, book reviews, news reports, etc. Check the article for these parts to be sure.

You can limit your search results to primary research, peer-reviewed, or refereed articles in many databases. To search for scholarly articles in HOLLIS, type your keywords in the box at the top and select Catalog&Articles from the choices that appear next. On the search results screen, look for the Show Only section on the right and click on Peer-reviewed articles. (Make sure to log in with your HarvardKey to get the full text of the articles that Harvard has purchased.)

Many of the databases that Harvard offers have similar features for limiting results to peer-reviewed or scholarly articles. For example, in Academic Search Premier, click the box for Scholarly (Peer Reviewed) Journals on the search screen.

Review articles are another great way to find scholarly primary research articles. Review articles are not considered "primary research," but they pull together primary research articles on a topic and summarize and analyze them. In Google Scholar, click on Review Articles at the left of the search results screen. Ask your professor whether review articles can be cited for an assignment.

A note about Google searching. A regular Google search turns up a broad variety of results, which can include scholarly articles, but Google results also contain commercial and popular sources that may be misleading, outdated, etc. Use Google Scholar through the Harvard Library instead.

About Wikipedia. Wikipedia is not considered scholarly and should not be cited, but it frequently includes references to scholarly articles. Before using those references for an assignment, double-check them by finding them in HOLLIS or a more specific subject database.

Still not sure about a source? Consult the course syllabus for guidance, contact your professor or teaching fellow, or use the Ask A Librarian service.

  • Last Updated: Oct 3, 2023 3:37 PM
  • URL: https://guides.library.harvard.edu/FindingScholarlyArticles

