• Methodology
  • Open access
  • Published: 11 October 2016

Reviewing the research methods literature: principles and strategies illustrated by a systematic overview of sampling in qualitative research

  • Stephen J. Gentles 1,4,
  • Cathy Charles 1,
  • David B. Nicholas 2,
  • Jenny Ploeg 3 &
  • K. Ann McKibbon 1

Systematic Reviews volume 5, Article number: 172 (2016)


Abstract

Background

Overviews of methods are potentially useful means to increase clarity and enhance collective understanding of specific methods topics that may be characterized by ambiguity, inconsistency, or a lack of comprehensiveness. This type of review represents a distinct literature synthesis method, although to date, its methodology remains relatively undeveloped despite several aspects that demand unique review procedures. The purpose of this paper is to initiate discussion about what a rigorous systematic approach to reviews of methods, referred to here as systematic methods overviews, might look like by providing tentative suggestions for approaching specific challenges likely to be encountered. The guidance offered here was derived from experience conducting a systematic methods overview on the topic of sampling in qualitative research.

Results

The guidance is organized into several principles that highlight specific objectives for this type of review given the common challenges that must be overcome to achieve them. Optional strategies for achieving each principle are also proposed, along with discussion of how they were successfully implemented in the overview on sampling. We describe seven paired principles and strategies that address the following aspects: delimiting the initial set of publications to consider, searching beyond standard bibliographic databases, searching without the availability of relevant metadata, selecting publications on purposeful conceptual grounds, defining concepts and other information to abstract iteratively, accounting for inconsistent terminology used to describe specific methods topics, and generating rigorous verifiable analytic interpretations. Since a broad aim in systematic methods overviews is to describe and interpret the relevant literature in qualitative terms, we suggest that iterative decision making at various stages of the review process and a rigorous qualitative approach to analysis are necessary features of this review type.

Conclusions

We believe that the principles and strategies provided here will be useful to anyone choosing to undertake a systematic methods overview. This paper represents an initial effort to promote high quality critical evaluations of the literature regarding problematic methods topics, which have the potential to promote clearer, shared understandings, and accelerate advances in research methods. Further work is warranted to develop more definitive guidance.


Background

While reviews of methods are not new, they represent a distinct review type whose methodology remains relatively under-addressed in the literature despite the clear implications for unique review procedures. One of the few examples to describe it is a chapter containing reflections of two contributing authors in a book of 21 reviews on methodological topics compiled for the British National Health Service, Health Technology Assessment Program [ 1 ]. Notable is their observation of how the differences between the methods reviews and conventional quantitative systematic reviews, specifically attributable to their varying content and purpose, have implications for defining what qualifies as systematic. While the authors describe general aspects of “systematicity” (including rigorous application of a methodical search, abstraction, and analysis), they also describe a high degree of variation within the category of methods reviews itself and so offer little in the way of concrete guidance. In this paper, we present tentative concrete guidance, in the form of a preliminary set of proposed principles and optional strategies, for a rigorous systematic approach to reviewing and evaluating the literature on quantitative or qualitative methods topics. For purposes of this article, we have used the term systematic methods overview to emphasize the notion of a systematic approach to such reviews.

The conventional focus of rigorous literature reviews (i.e., review types for which systematic methods have been codified, including the various approaches to quantitative systematic reviews [ 2 – 4 ], and the numerous forms of qualitative and mixed methods literature synthesis [ 5 – 10 ]) is to synthesize empirical research findings from multiple studies. By contrast, the focus of overviews of methods, including the systematic approach we advocate, is to synthesize guidance on methods topics. The literature consulted for such reviews may include the methods literature, methods-relevant sections of empirical research reports, or both. Thus, this paper adds to previous work published in this journal—namely, recent preliminary guidance for conducting reviews of theory [ 11 ]—that has extended the application of systematic review methods to novel review types that are concerned with subject matter other than empirical research findings.

Published examples of methods overviews illustrate the varying objectives they can have. One objective is to establish methodological standards for appraisal purposes. For example, reviews of existing quality appraisal standards have been used to propose universal standards for appraising the quality of primary qualitative research [ 12 ] or evaluating qualitative research reports [ 13 ]. A second objective is to survey the methods-relevant sections of empirical research reports to establish current methods use and reporting practices, which Moher and colleagues [ 14 ] recommend as a means for establishing the needs to be addressed in reporting guidelines (see, for example [ 15 , 16 ]). A third objective for a methods review is to offer clarity and enhance collective understanding regarding a specific methods topic that may be characterized by ambiguity, inconsistency, or a lack of comprehensiveness within the available methods literature. An example of this is an overview whose objective was to review the inconsistent definitions of intention-to-treat analysis (the methodologically preferred approach to analyze randomized controlled trial data) that have been offered in the methods literature and propose a solution for improving conceptual clarity [ 17 ]. Such reviews are warranted because students and researchers who must learn or apply research methods typically lack the time to systematically search, retrieve, review, and compare the available literature to develop a thorough and critical sense of the varied approaches regarding certain controversial or ambiguous methods topics.

While systematic methods overviews , as a review type, include both reviews of the methods literature and reviews of methods-relevant sections from empirical study reports, the guidance provided here is primarily applicable to reviews of the methods literature since it was derived from the experience of conducting such a review [ 18 ], described below. To our knowledge, there are no well-developed proposals on how to rigorously conduct such reviews. Such guidance would have the potential to improve the thoroughness and credibility of critical evaluations of the methods literature, which could increase their utility as a tool for generating understandings that advance research methods, both qualitative and quantitative. Our aim in this paper is thus to initiate discussion about what might constitute a rigorous approach to systematic methods overviews. While we hope to promote rigor in the conduct of systematic methods overviews wherever possible, we do not wish to suggest that all methods overviews need be conducted to the same standard. Rather, we believe that the level of rigor may need to be tailored pragmatically to the specific review objectives, which may not always justify the resource requirements of an intensive review process.

The example systematic methods overview on sampling in qualitative research

The principles and strategies we propose in this paper are derived from experience conducting a systematic methods overview on the topic of sampling in qualitative research [ 18 ]. The main objective of that methods overview was to bring clarity to, and deepen understanding of, the prominent concepts related to sampling in qualitative research (purposeful sampling strategies, saturation, etc.). Specifically, we interpreted the available guidance, commenting on areas lacking clarity, consistency, or comprehensiveness (without proposing any recommendations on how to do sampling). This was achieved by a comparative and critical analysis of publications representing the most influential (i.e., highly cited) guidance across several methodological traditions in qualitative research.

The specific methods and procedures for the overview on sampling [ 18 ] from which our proposals are derived were developed both by soliciting initial input from local experts in qualitative research and an expert health librarian (KAM), and through ongoing careful deliberation throughout the review process. To summarize, in that review, we employed a transparent and rigorous approach to search the methods literature, selected publications for inclusion according to a purposeful and iterative process, abstracted textual data using structured abstraction forms, and analyzed (synthesized) the data using a systematic multi-step approach featuring abstraction of text, summary of information in matrices, and analytic comparisons.

For this article, we reflected on both the problems and challenges encountered at different stages of the review and our means for selecting justifiable procedures to deal with them. Several principles were then derived by considering the generic nature of these problems, while the generalizable aspects of the procedures used to address them formed the basis of optional strategies. Further details of the specific methods and procedures used in the overview on qualitative sampling are provided below to illustrate both the types of objectives and challenges that reviewers will likely need to consider and our approach to implementing each of the principles and strategies.

Organization of the guidance into principles and strategies

For the purposes of this article, principles are general statements outlining what we propose are important aims or considerations within a particular review process, given the unique objectives or challenges to be overcome with this type of review. These statements follow the general format, “considering the objective or challenge of X, we propose Y to be an important aim or consideration.” Strategies are optional, flexible approaches for implementing the preceding principle. Thus, generic challenges give rise to principles, which in turn give rise to strategies.

We organize the principles and strategies below into three sections corresponding to processes characteristic of most systematic literature synthesis approaches: literature identification and selection ; data abstraction from the publications selected for inclusion; and analysis , including critical appraisal and synthesis of the abstracted data. Within each section, we also describe the specific methodological decisions and procedures used in the overview on sampling in qualitative research [ 18 ] to illustrate how the principles and strategies for each review process were applied and implemented in a specific case. We expect this guidance and accompanying illustrations will be useful for anyone considering engaging in a methods overview, particularly those who may be familiar with conventional systematic review methods but may not yet appreciate some of the challenges specific to reviewing the methods literature.

Results and discussion

Literature identification and selection

The identification and selection process includes search and retrieval of publications and the development and application of inclusion and exclusion criteria to select the publications that will be abstracted and analyzed in the final review. Literature identification and selection for overviews of the methods literature is challenging and potentially more resource-intensive than for most reviews of empirical research. This is true for several reasons that we describe below, alongside discussion of the potential solutions. Additionally, we suggest in this section how the selection procedures can be chosen to match the specific analytic approach used in methods overviews.

Delimiting a manageable set of publications

One aspect of methods overviews that can make identification and selection challenging is the fact that the universe of literature containing potentially relevant information regarding most methods-related topics is expansive and often unmanageably so. Reviewers are faced with two large categories of literature: the methods literature , where the possible publication types include journal articles, books, and book chapters; and the methods-relevant sections of empirical study reports , where the possible publication types include journal articles, monographs, books, theses, and conference proceedings. In our systematic overview of sampling in qualitative research, exhaustively searching (including retrieval and first-pass screening) all publication types across both categories of literature for information on a single methods-related topic was too burdensome to be feasible. The following proposed principle follows from the need to delimit a manageable set of literature for the review.

Principle #1:

Considering the broad universe of potentially relevant literature, we propose that an important objective early in the identification and selection stage is to delimit a manageable set of methods-relevant publications in accordance with the objectives of the methods overview.

Strategy #1:

To limit the set of methods-relevant publications that must be managed in the selection process, reviewers have the option to initially review only the methods literature, and exclude the methods-relevant sections of empirical study reports, provided this aligns with the review’s particular objectives.

We propose that reviewers are justified in choosing to select only the methods literature when the objective is to map out the range of recognized concepts relevant to a methods topic, to summarize the most authoritative or influential definitions or meanings for methods-related concepts, or to demonstrate a problematic lack of clarity regarding a widely established methods-related concept and potentially make recommendations for a preferred approach to the methods topic in question. For example, in the case of the methods overview on sampling [ 18 ], the primary aim was to define areas lacking in clarity for multiple widely established sampling-related topics. In the review on intention-to-treat in the context of missing outcome data [ 17 ], the authors identified a lack of clarity based on multiple inconsistent definitions in the literature and went on to recommend separating the issue of how to handle missing outcome data from the issue of whether an intention-to-treat analysis can be claimed.

In contrast to strategy #1, it may be appropriate to select the methods-relevant sections of empirical study reports when the objective is to illustrate how a methods concept is operationalized in research practice or reported by authors. For example, one could review all the publications in 2 years’ worth of issues of five high-impact field-related journals to answer questions about how researchers describe implementing a particular method or approach, or to quantify how consistently they define or report using it. Such reviews are often used to highlight gaps in the reporting practices regarding specific methods, which may be used to justify items to address in reporting guidelines (for example, [ 14 – 16 ]).
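To make concrete the kind of output such a reporting-practice survey produces, the following minimal sketch (all screening labels and counts are invented for illustration; no such script is implied by the reviews cited) tallies how many screened articles explicitly report a method of interest:

```python
# Hypothetical illustration of the tallies a reporting-practice survey yields:
# counting how many screened articles explicitly report a method of interest.
from collections import Counter

# Invented screening labels, one per screened article.
screening_results = [
    "reported with definition", "reported without definition", "not reported",
    "reported without definition", "reported with definition", "not reported",
    "reported without definition",
]

tally = Counter(screening_results)
total = len(screening_results)
for label, count in tally.most_common():
    print(f"{label}: {count}/{total} ({100 * count / total:.0f}%)")
```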

It is worth recognizing that other authors have advocated positions broader than ours regarding the scope of literature to be considered in a review. Suri [ 10 ] (who, like us, emphasizes how different sampling strategies are suitable for different literature synthesis objectives) has, for example, described a two-stage literature sampling procedure (pp. 96–97). First, reviewers use an initial approach to conduct a broad overview of the field—for reviews of methods topics, this would entail an initial review of the research methods literature. This is followed by a second, more focused stage in which practical examples are purposefully selected—for methods reviews, this would involve sampling the empirical literature to illustrate key themes and variations. While this approach is seductive in its capacity to generate more in-depth and interpretive analytic findings, some reviewers may consider the second stage too resource-intensive to include, no matter how selective the purposeful sampling. In the overview on sampling, where we stopped after the first stage [ 18 ], we discussed our selective focus on the methods literature as a limitation that left opportunities for further analysis of the literature. We explicitly recommended, for example, theoretical sampling as a topic for which a future review of the methods sections of empirical reports was justified to answer specific questions identified in the primary review.

Ultimately, reviewers must make pragmatic decisions that balance resource considerations, combined with informed predictions about the depth and complexity of literature available on their topic, with the stated objectives of their review. The remaining principles and strategies apply primarily to overviews that include the methods literature, although some aspects may be relevant to reviews that include empirical study reports.

Searching beyond standard bibliographic databases

An important reality affecting identification and selection in overviews of the methods literature is the increased likelihood for relevant publications to be located in sources other than journal articles (which is usually not the case for overviews of empirical research, where journal articles generally represent the primary publication type). In the overview on sampling [ 18 ], out of 41 full-text publications retrieved and reviewed, only 4 were journal articles, while 37 were books or book chapters. Since many books and book chapters did not exist electronically, their full text had to be physically retrieved in hardcopy, while 11 publications were retrievable only through interlibrary loan or purchase request. The tasks associated with such retrieval are substantially more time-consuming than electronic retrieval. Since a substantial proportion of methods-related guidance may be located in publication types that are less comprehensively indexed in standard bibliographic databases, identification and retrieval thus become complicated processes.

Principle #2:

Considering that important sources of methods guidance can be located in non-journal publication types (e.g., books, book chapters) that tend to be poorly indexed in standard bibliographic databases, it is important to consider alternative search methods for identifying relevant publications to be further screened for inclusion.

Strategy #2:

To identify books, book chapters, and other non-journal publication types not thoroughly indexed in standard bibliographic databases, reviewers may choose to consult one or more of the following less standard sources: Google Scholar, publisher web sites, or expert opinion.

In the case of the overview on sampling in qualitative research [ 18 ], Google Scholar had two advantages over other standard bibliographic databases: it indexes and returns records of books and book chapters likely to contain guidance on qualitative research methods topics; and it has been validated as providing higher citation counts than ISI Web of Science (a producer of numerous bibliographic databases accessible through institutional subscription) for several non-biomedical disciplines including the social sciences where qualitative research methods are prominently used [ 19 – 21 ]. While we identified numerous useful publications by consulting experts, the author publication lists generated through Google Scholar searches were uniquely useful to identify more recent editions of methods books identified by experts.
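As a purely illustrative sketch of the record-keeping that searching across several non-database sources implies (the review itself used no such script, and all records and field names below are hypothetical), candidate publications identified through Google Scholar, publisher web sites, and expert suggestions can be consolidated into a single de-duplicated list for screening:

```python
# Illustrative only: consolidating candidate records gathered from several
# non-database sources into one de-duplicated screening list.
from dataclasses import dataclass

@dataclass(frozen=True)
class Candidate:
    author: str
    title: str
    year: int
    source: str  # where the record was identified (e.g., expert opinion)

def dedupe_key(candidate: Candidate) -> tuple:
    # Crude normalization so the same edition found via two routes collapses
    # into one record; different editions (different years) are kept separate.
    return (candidate.author.lower().strip(),
            candidate.title.lower().strip(),
            candidate.year)

def consolidate(records: list[Candidate]) -> list[Candidate]:
    seen: dict[tuple, Candidate] = {}
    for record in records:
        seen.setdefault(dedupe_key(record), record)  # keep first occurrence
    return list(seen.values())

if __name__ == "__main__":
    raw = [
        Candidate("Patton MQ", "Qualitative research and evaluation methods", 2002, "expert opinion"),
        Candidate("patton mq", "Qualitative Research and Evaluation Methods", 2002, "Google Scholar"),
        Candidate("Creswell JW", "Qualitative inquiry and research design", 2013, "publisher web site"),
    ]
    for c in consolidate(raw):
        print(f"{c.author} ({c.year}) {c.title} (first identified via {c.source})")
```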

Searching without relevant metadata

Determining what publications to select for inclusion in the overview on sampling [ 18 ] could only rarely be accomplished by reviewing the publication’s metadata. This was because for the many books and other non-journal type publications we identified as possibly relevant, the potential content of interest would be located in only a subsection of the publication. In this common scenario for reviews of the methods literature (as opposed to methods overviews that include empirical study reports), reviewers will often be unable to employ standard title, abstract, and keyword database searching or screening as a means for selecting publications.

Principle #3:

Considering that the presence of information about the topic of interest may not be indicated in the metadata for books and similar publication types, it is important to consider other means of identifying potentially useful publications for further screening.

Strategy #3:

One approach to identifying potentially useful books and similar publication types is to consider what classes of such publications (e.g., all methods manuals for a certain research approach) are likely to contain relevant content, then identify, retrieve, and review the full text of corresponding publications to determine whether they contain information on the topic of interest.

In the example of the overview on sampling in qualitative research [ 18 ], the topic of interest (sampling) was one of numerous topics covered in the general qualitative research methods manuals. Consequently, examples from this class of publications first had to be identified for retrieval according to non-keyword-dependent criteria. Thus, all methods manuals within the three research traditions reviewed (grounded theory, phenomenology, and case study) that might contain discussion of sampling were sought through Google Scholar and expert opinion, their full text obtained, and hand-searched for relevant content to determine eligibility. We used tables of contents and index sections of books to aid this hand searching.

Purposefully selecting literature on conceptual grounds

A final consideration in methods overviews relates to the type of analysis used to generate the review findings. Unlike quantitative systematic reviews where reviewers aim for accurate or unbiased quantitative estimates—something that requires identifying and selecting the literature exhaustively to obtain all relevant data available (i.e., a complete sample)—in methods overviews, reviewers must describe and interpret the relevant literature in qualitative terms to achieve review objectives. In other words, the aim in methods overviews is to seek coverage of the qualitative concepts relevant to the methods topic at hand. For example, in the overview of sampling in qualitative research [ 18 ], achieving review objectives entailed providing conceptual coverage of eight sampling-related topics that emerged as key domains. The following principle recognizes that literature sampling should therefore support generating qualitative conceptual data as the input to analysis.

Principle #4:

Since the analytic findings of a systematic methods overview are generated through qualitative description and interpretation of the literature on a specified topic, selection of the literature should be guided by a purposeful strategy designed to achieve adequate conceptual coverage (i.e., representing an appropriate degree of variation in relevant ideas) of the topic according to objectives of the review.

Strategy #4:

One strategy for choosing the purposeful approach to use in selecting the literature according to the review objectives is to consider whether those objectives imply exploring concepts either at a broad overview level, in which case combining maximum variation selection with a strategy that limits yield (e.g., critical case, politically important, or sampling for influence—described below) may be appropriate; or in depth, in which case purposeful approaches aimed at revealing innovative cases will likely be necessary.

In the methods overview on sampling, the implied scope was broad since we set out to review publications on sampling across three divergent qualitative research traditions—grounded theory, phenomenology, and case study—to facilitate making informative conceptual comparisons. Such an approach would be analogous to maximum variation sampling.

At the same time, the purpose of that review was to critically interrogate the clarity, consistency, and comprehensiveness of literature from these traditions that was “most likely to have widely influenced students’ and researchers’ ideas about sampling” (p. 1774) [ 18 ]. In other words, we explicitly set out to review and critique the most established and influential (and therefore dominant) literature, since this represents a common basis of knowledge among students and researchers seeking understanding or practical guidance on sampling in qualitative research. To achieve this objective, we purposefully sampled publications according to the criterion of influence , which we operationalized as how often an author or publication has been referenced in print or informal discourse. This second sampling approach also limited the literature we needed to consider within our broad scope review to a manageable amount.

To operationalize this strategy of sampling for influence, we sought to identify both the most influential authors within a qualitative research tradition (all of whose citations were subsequently screened) and the most influential publications on the topic of interest by non-influential authors. This involved a flexible approach that combined multiple indicators of influence to avoid the dilemma that any single indicator might provide inadequate coverage. These indicators included bibliometric data (h-index for author influence [ 22 ]; number of citations for publication influence), expert opinion, and cross-references in the literature (i.e., snowball sampling). As a final selection criterion, a publication was included only if it made an original contribution in terms of novel guidance regarding sampling or a related concept; thus, purely secondary sources were excluded. Publish or Perish software (Anne-Wil Harzing; available at http://www.harzing.com/resources/publish-or-perish ) was used to generate bibliometric data via the Google Scholar database. Figure 1 illustrates how identification and selection in the methods overview on sampling was a multi-faceted and iterative process. The authors selected as influential, and the publications selected for inclusion or exclusion are listed in Additional file 1 (Matrices 1, 2a, 2b).

Fig. 1 Literature identification and selection process used in the methods overview on sampling [ 18 ]
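For readers unfamiliar with the bibliometric indicator mentioned above, the following sketch shows how an h-index [ 22 ] can be computed from per-publication citation counts and used to rank candidate authors. The citation counts here are invented, and in the actual overview such data were generated with Publish or Perish rather than custom code:

```python
# Illustrative computation of the indicator used for author influence: the
# h-index is the largest h such that an author has at least h publications,
# each cited at least h times. All citation counts below are invented.

def h_index(citation_counts: list[int]) -> int:
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

if __name__ == "__main__":
    # Hypothetical candidate authors with per-publication citation counts.
    candidates = {
        "Author A": [310, 120, 95, 40, 12, 3],
        "Author B": [15, 9, 7, 7, 2],
        "Author C": [60, 55, 30, 28, 25, 20, 4],
    }
    for name, counts in sorted(candidates.items(),
                               key=lambda item: h_index(item[1]),
                               reverse=True):
        print(f"{name}: h-index = {h_index(counts)}")
```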

In summary, the strategies of seeking maximum variation and sampling for influence were employed in the sampling overview to meet the specific review objectives described. Reviewers will need to consider the full range of purposeful literature sampling approaches at their disposal in deciding what best matches the specific aims of their own reviews. Suri [ 10 ] has recently retooled Patton’s well-known typology of purposeful sampling strategies (originally intended for primary research) for application to literature synthesis, providing a useful resource in this respect.

Data abstraction

The purpose of data abstraction in rigorous literature reviews is to locate and record all data relevant to the topic of interest from the full text of included publications, making them available for subsequent analysis. Conventionally, a data abstraction form—consisting of numerous distinct conceptually defined fields to which corresponding information from the source publication is recorded—is developed and employed. There are several challenges, however, to the processes of developing the abstraction form and abstracting the data itself when conducting methods overviews, which we address here. Some of these problems and their solutions may be familiar to those who have conducted qualitative literature syntheses, which are similarly conceptual.

Iteratively defining conceptual information to abstract

In the overview on sampling [ 18 ], while we surveyed multiple sources beforehand to develop a list of concepts relevant for abstraction (e.g., purposeful sampling strategies, saturation, sample size), there was no way for us to anticipate some concepts prior to encountering them in the review process. Indeed, in many cases, reviewers are unable to determine the complete set of methods-related concepts that will be the focus of the final review a priori without having systematically reviewed the publications to be included. Thus, defining what information to abstract beforehand may not be feasible.

Principle #5:

Considering the potential impracticality of defining a complete set of relevant methods-related concepts from a body of literature one has not yet systematically read, selecting and defining fields for data abstraction must often be undertaken iteratively. Thus, concepts to be abstracted can be expected to grow and change as data abstraction proceeds.

Strategy #5:

Reviewers can develop an initial form or set of concepts for abstraction purposes according to standard methods (e.g., incorporating expert feedback, pilot testing) and remain attentive to the need to iteratively revise it as concepts are added or modified during the review. Reviewers should document revisions and return to re-abstract data from previously abstracted publications as the new data requirements are determined.

In the sampling overview [ 18 ], we developed and maintained the abstraction form in Microsoft Word. We derived the initial set of abstraction fields from our own knowledge of relevant sampling-related concepts, consultation with local experts, and reviewing a pilot sample of publications. Since the publications in this review included a large proportion of books, the abstraction process often began by flagging the broad sections within a publication containing topic-relevant information for detailed review to identify text to abstract. When reviewing flagged text, the reviewer occasionally encountered an unanticipated concept significant enough to warrant being added as a new field to the abstraction form. For example, a field was added to capture how authors described the timing of sampling decisions, whether before (a priori) or after (ongoing) starting data collection, or whether this was unclear. In these cases, we systematically documented the modification to the form and returned to previously abstracted publications to abstract any information that might be relevant to the new field.
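The following sketch is a hypothetical illustration of the same iterative logic (the actual overview maintained its abstraction form in Microsoft Word, not in code, and all field names and records below are invented): abstraction fields are versioned, every revision is documented, and publications abstracted under an earlier version of the form are flagged for re-abstraction.

```python
# Hypothetical sketch: tracking an abstraction form whose fields grow during
# the review, and flagging publications abstracted under an earlier version
# so they can be revisited.
from dataclasses import dataclass, field

@dataclass
class AbstractionForm:
    fields: dict[str, str] = field(default_factory=dict)  # field name -> definition
    version: int = 1
    changelog: list[str] = field(default_factory=list)

    def add_field(self, name: str, definition: str) -> None:
        # Document every revision so the iterative process stays transparent.
        self.fields[name] = definition
        self.version += 1
        self.changelog.append(f"v{self.version}: added field '{name}'")

@dataclass
class AbstractedPublication:
    citation: str
    abstracted_with_version: int
    data: dict[str, str] = field(default_factory=dict)

def needs_reabstraction(pubs: list[AbstractedPublication],
                        form: AbstractionForm) -> list[str]:
    """Publications abstracted before the current form version."""
    return [p.citation for p in pubs if p.abstracted_with_version < form.version]

if __name__ == "__main__":
    form = AbstractionForm(
        fields={"purposeful sampling": "any deliberate, non-probability selection of cases"})
    pubs = [AbstractedPublication("Methods manual X (2005)", abstracted_with_version=1)]
    # An unanticipated concept is encountered mid-review and added as a field:
    form.add_field("timing of sampling decisions",
                   "whether sampling is decided a priori or remains ongoing during data collection")
    print(needs_reabstraction(pubs, form))  # -> ['Methods manual X (2005)']
```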

The logic of this strategy is analogous to the logic used in a form of research synthesis called best fit framework synthesis (BFFS) [ 23 – 25 ]. In that method, reviewers initially code evidence using an a priori framework they have selected. When evidence cannot be accommodated by the selected framework, reviewers then develop new themes or concepts from which they construct a new expanded framework. Both the strategy proposed and the BFFS approach to research synthesis are notable for their rigorous and transparent means to adapt a final set of concepts to the content under review.

Accounting for inconsistent terminology

An important complication affecting the abstraction process in methods overviews is that the language used by authors to describe methods-related concepts can easily vary across publications. For example, authors from different qualitative research traditions often use different terms for similar methods-related concepts. Furthermore, as we found in the sampling overview [ 18 ], there may be cases where no identifiable term, phrase, or label for a methods-related concept is used at all, and a description of it is given instead. This can make searching the text for relevant concepts based on keywords unreliable.

Principle #6:

Since accepted terms may not be used consistently to refer to methods concepts, it is necessary to rely on the definitions for concepts, rather than keywords, to identify relevant information in the publication to abstract.

Strategy #6:

An effective means to systematically identify relevant information is to develop and iteratively adjust written definitions for key concepts (corresponding to abstraction fields) that are consistent with, and as inclusive as possible of, the literature reviewed. Reviewers then seek information that matches these definitions (rather than keywords) when scanning a publication for relevant data to abstract.

In the abstraction process for the sampling overview [ 18 ], we noted several concepts of interest to the review for which abstraction by keyword was particularly problematic due to inconsistent terminology across publications: sampling, purposeful sampling, sampling strategy, and saturation (for examples, see Additional file 1, Matrices 3a, 3b, 4). We iteratively developed definitions for these concepts by abstracting text from publications that either provided an explicit definition or from which an implicit definition could be derived; this text was recorded in fields dedicated to the concept’s definition. Using a method of constant comparison, we used text from definition fields to inform and modify a centrally maintained definition of the corresponding concept to optimize its fit and inclusiveness with the literature reviewed. Table 1 shows, as an example, the final definition constructed in this way for one of the central concepts of the review, qualitative sampling.
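As a purely illustrative sketch of the record-keeping this strategy implies (the actual overview used abstraction forms rather than code, the excerpt below is hypothetical, and the comparative judgments themselves remain the reviewer's), each concept can be stored with its abstracted definition excerpts, a centrally maintained working definition, and a log of revisions made through constant comparison:

```python
# Illustrative record-keeping only: excerpts, a working definition, and a
# revision trail per concept. The excerpt and source below are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ConceptDefinition:
    concept: str
    working_definition: str
    excerpts: list[tuple[str, str]] = field(default_factory=list)  # (source, excerpt)
    revisions: list[str] = field(default_factory=list)

    def add_excerpt(self, source: str, excerpt: str) -> None:
        self.excerpts.append((source, excerpt))

    def revise(self, new_definition: str, rationale: str) -> None:
        # The comparison itself is a human judgment; the record simply keeps
        # every prior wording so the analytic trail remains verifiable.
        self.revisions.append(f"was: {self.working_definition} | rationale: {rationale}")
        self.working_definition = new_definition

if __name__ == "__main__":
    saturation = ConceptDefinition(
        concept="saturation",
        working_definition="the point at which further data collection yields no new information",
    )
    saturation.add_excerpt(
        "Hypothetical methods text (2001)",
        "data collection may stop once additional cases no longer add to the analysis",
    )
    saturation.revise(
        "the point at which continued data collection or analysis adds nothing new "
        "to what has already been developed",
        "broadened after comparison with the abstracted excerpt, which refers to "
        "analysis as well as data collection",
    )
    print(saturation.working_definition)
```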

We applied iteratively developed definitions when making decisions about what specific text to abstract for an existing field, which allowed us to abstract concept-relevant data even if no recognized keyword was used. For example, this was the case for the sampling-related concept, saturation , where the relevant text available for abstraction in one publication [ 26 ]—“to continue to collect data until nothing new was being observed or recorded, no matter how long that takes”—was not accompanied by any term or label whatsoever.

This comparative analytic strategy (and our approach to analysis more broadly as described in strategy #7, below) is analogous to the process of reciprocal translation —a technique first introduced for meta-ethnography by Noblit and Hare [ 27 ] that has since been recognized as a common element in a variety of qualitative metasynthesis approaches [ 28 ]. Reciprocal translation, taken broadly, involves making sense of a study’s findings in terms of the findings of the other studies included in the review. In practice, it has been operationalized in different ways. Melendez-Torres and colleagues developed a typology from their review of the metasynthesis literature, describing four overlapping categories of specific operations undertaken in reciprocal translation: visual representation, key paper integration, data reduction and thematic extraction, and line-by-line coding [ 28 ]. The approaches suggested in both strategies #6 and #7, with their emphasis on constant comparison, appear to fall within the line-by-line coding category.

Generating credible and verifiable analytic interpretations

The analysis in a systematic methods overview must support its more general objective, which we suggested above is often to offer clarity and enhance collective understanding regarding a chosen methods topic. In our experience, this involves describing and interpreting the relevant literature in qualitative terms. Furthermore, any interpretative analysis required may entail reaching different levels of abstraction, depending on the more specific objectives of the review. For example, in the overview on sampling [ 18 ], we aimed to produce a comparative analysis of how multiple sampling-related topics were treated differently within and among different qualitative research traditions. To promote credibility of the review, however, not only should one seek a qualitative analytic approach that facilitates reaching varying levels of abstraction but that approach must also ensure that abstract interpretations are supported and justified by the source data and not solely the product of the analyst’s speculative thinking.

Principle #7:

Considering the qualitative nature of the analysis required in systematic methods overviews, it is important to select an analytic method whose interpretations can be verified as being consistent with the literature selected, regardless of the level of abstraction reached.

Strategy #7:

We suggest employing the constant comparative method of analysis [ 29 ] because it supports developing and verifying analytic links to the source data throughout progressively interpretive or abstract levels. In applying this approach, we advise rigorously documenting how supportive quotes or references to the original texts are carried forward in the successive steps of analysis to allow for easy verification.

The analytic approach used in the methods overview on sampling [ 18 ] comprised four explicit steps, progressing in level of abstraction—data abstraction, matrices, narrative summaries, and final analytic conclusions (Fig.  2 ). While we have positioned data abstraction as the second stage of the generic review process (prior to Analysis), above, we also considered it as an initial step of analysis in the sampling overview for several reasons. First, it involved a process of constant comparisons and iterative decision-making about the fields to add or define during development and modification of the abstraction form, through which we established the range of concepts to be addressed in the review. At the same time, abstraction involved continuous analytic decisions about what textual quotes (ranging in size from short phrases to numerous paragraphs) to record in the fields thus created. This constant comparative process was analogous to open coding in which textual data from publications was compared to conceptual fields (equivalent to codes) or to other instances of data previously abstracted when constructing definitions to optimize their fit with the overall literature as described in strategy #6. Finally, in the data abstraction step, we also recorded our first interpretive thoughts in dedicated fields, providing initial material for the more abstract analytic steps.

Fig. 2 Summary of progressive steps of analysis used in the methods overview on sampling [ 18 ]

In the second step of the analysis, we constructed topic-specific matrices , or tables, by copying relevant quotes from abstraction forms into the appropriate cells of matrices (for the complete set of analytic matrices developed in the sampling review, see Additional file 1 (matrices 3 to 10)). Each matrix ranged from one to five pages; row headings, nested three-deep, identified the methodological tradition, author, and publication, respectively; and column headings identified the concepts, which corresponded to abstraction fields. Matrices thus allowed us to make further comparisons across methodological traditions, and between authors within a tradition. In the third step of analysis, we recorded our comparative observations as narrative summaries , in which we used illustrative quotes more sparingly. In the final step, we developed analytic conclusions based on the narrative summaries about the sampling-related concepts within each methodological tradition for which clarity, consistency, or comprehensiveness of the available guidance appeared to be lacking. Higher levels of analysis thus built logically from the lower levels, enabling us to easily verify analytic conclusions by tracing the support for claims by comparing the original text of publications reviewed.
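To illustrate the matrix structure described above, the following sketch (with invented placeholder records) assembles abstraction records into a topic-specific matrix whose rows are nested by tradition, author, and publication and whose columns correspond to concepts:

```python
# Minimal sketch with invented placeholder records: rows nested by tradition,
# author, and publication; columns corresponding to concepts (abstraction
# fields); cells holding the abstracted quotes.

records = [
    # (tradition, author, publication, concept, abstracted quote)
    ("grounded theory", "Author A", "Methods text (1998)", "saturation", "quote..."),
    ("grounded theory", "Author A", "Methods text (1998)", "sample size", "quote..."),
    ("phenomenology", "Author B", "Methods text (2000)", "saturation", "quote..."),
]

def build_matrix(rows):
    matrix: dict[tuple, dict[str, str]] = {}
    for tradition, author, publication, concept, quote in rows:
        matrix.setdefault((tradition, author, publication), {})[concept] = quote
    return matrix

if __name__ == "__main__":
    matrix = build_matrix(records)
    concepts = sorted({concept for cells in matrix.values() for concept in cells})
    print(" | ".join(["tradition", "author", "publication", *concepts]))
    for (tradition, author, publication), cells in matrix.items():
        row = [tradition, author, publication] + [cells.get(c, "") for c in concepts]
        print(" | ".join(row))
```

Blank cells in such a matrix can directly expose where a given author or tradition offers no guidance on a concept, which feeds the comparative observations recorded in the narrative summaries.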

Integrative versus interpretive methods overviews

The analytic product of systematic methods overviews is comparable to qualitative evidence syntheses, since both involve describing and interpreting the relevant literature in qualitative terms. Most qualitative synthesis approaches strive to produce new conceptual understandings that vary in level of interpretation. Dixon-Woods and colleagues [ 30 ] elaborate on a useful distinction, originating from Noblit and Hare [ 27 ], between integrative and interpretive reviews. Integrative reviews focus on summarizing available primary data and involve using largely secure and well defined concepts to do so; definitions are used from an early stage to specify categories for abstraction (or coding) of data, which in turn supports their aggregation; they do not seek as their primary focus to develop or specify new concepts, although they may achieve some theoretical or interpretive functions. For interpretive reviews, meanwhile, the main focus is to develop new concepts and theories that integrate them, with the implication that the concepts developed become fully defined towards the end of the analysis. These two forms are not completely distinct, and “every integrative synthesis will include elements of interpretation, and every interpretive synthesis will include elements of aggregation of data” [ 30 ].

The example methods overview on sampling [ 18 ] could be classified as predominantly integrative because its primary goal was to aggregate influential authors’ ideas on sampling-related concepts; there were also, however, elements of interpretive synthesis since it aimed to develop new ideas about where clarity in guidance on certain sampling-related topics is lacking, and definitions for some concepts were flexible and not fixed until late in the review. We suggest that most systematic methods overviews will be classifiable as predominantly integrative (aggregative). Nevertheless, more highly interpretive methods overviews are also quite possible—for example, when the review objective is to provide a highly critical analysis for the purpose of generating new methodological guidance. In such cases, reviewers may need to sample more deeply (see strategy #4), specifically by selecting empirical research reports (i.e., to go beyond dominant or influential ideas in the methods literature) that are likely to feature innovations or instructive lessons in employing a given method.

Conclusions

In this paper, we have outlined tentative guidance in the form of seven principles and strategies on how to conduct systematic methods overviews, a review type in which methods-relevant literature is systematically analyzed with the aim of offering clarity and enhancing collective understanding regarding a specific methods topic. Our proposals include strategies for delimiting the set of publications to consider, searching beyond standard bibliographic databases, searching without the availability of relevant metadata, selecting publications on purposeful conceptual grounds, defining concepts and other information to abstract iteratively, accounting for inconsistent terminology, and generating credible and verifiable analytic interpretations. We hope the suggestions proposed will be useful to others undertaking reviews on methods topics in future.

As far as we are aware, this is the first published source of concrete guidance for conducting this type of review. It is important to note that our primary objective was to initiate methodological discussion by stimulating reflection on what rigorous methods for this type of review should look like, leaving the development of more complete guidance to future work. While derived from the experience of reviewing a single qualitative methods topic, we believe the principles and strategies provided are generalizable to overviews of both qualitative and quantitative methods topics alike. However, it is expected that additional challenges and insights for conducting such reviews have yet to be defined. Thus, we propose that next steps for developing more definitive guidance should involve an attempt to collect and integrate other reviewers’ perspectives and experiences in conducting systematic methods overviews on a broad range of qualitative and quantitative methods topics. Formalized guidance and standards would improve the quality of future methods overviews, something we believe has important implications for advancing qualitative and quantitative methodology. When undertaken to a high standard, rigorous critical evaluations of the available methods guidance have significant potential to make implicit controversies explicit, and improve the clarity and precision of our understandings of problematic qualitative or quantitative methods issues.

A review process central to most types of rigorous reviews of empirical studies, which we did not explicitly address in a separate review step above, is quality appraisal . The reason we have not treated this as a separate step stems from the different objectives of the primary publications included in overviews of the methods literature (i.e., providing methodological guidance) compared to the primary publications included in the other established review types (i.e., reporting findings from single empirical studies). This is not to say that appraising quality of the methods literature is not an important concern for systematic methods overviews. Rather, appraisal is much more integral to (and difficult to separate from) the analysis step, in which we advocate appraising clarity, consistency, and comprehensiveness—the quality appraisal criteria that we suggest are appropriate for the methods literature. As a second important difference regarding appraisal, we currently advocate appraising the aforementioned aspects at the level of the literature in aggregate rather than at the level of individual publications. One reason for this is that methods guidance from individual publications generally builds on previous literature, and thus we feel that ahistorical judgments about comprehensiveness of single publications lack relevance and utility. Additionally, while different methods authors may express themselves less clearly than others, their guidance can nonetheless be highly influential and useful, and should therefore not be downgraded or ignored based on considerations of clarity—which raises questions about the alternative uses that quality appraisals of individual publications might have. Finally, legitimate variability in the perspectives that methods authors wish to emphasize, and the levels of generality at which they write about methods, makes critiquing individual publications based on the criterion of clarity a complex and potentially problematic endeavor that is beyond the scope of this paper to address. By appraising the current state of the literature at a holistic level, reviewers stand to identify important gaps in understanding that represent valuable opportunities for further methodological development.

To summarize, the principles and strategies provided here may be useful to those seeking to undertake their own systematic methods overview. Additional work is needed, however, to establish guidance that is comprehensive by comparing the experiences from conducting a variety of methods overviews on a range of methods topics. Efforts that further advance standards for systematic methods overviews have the potential to promote high-quality critical evaluations that produce conceptually clear and unified understandings of problematic methods topics, thereby accelerating the advance of research methodology.

References

Hutton JL, Ashcroft R. What does “systematic” mean for reviews of methods? In: Black N, Brazier J, Fitzpatrick R, Reeves B, editors. Health services research methods: a guide to best practice. London: BMJ Publishing Group; 1998. p. 249–54.


Higgins JPT, Green S, editors. Cochrane handbook for systematic reviews of interventions. Version 5.1.0. The Cochrane Collaboration; 2011.

Centre for Reviews and Dissemination. Systematic reviews: CRD’s guidance for undertaking reviews in health care. York: Centre for Reviews and Dissemination; 2009.

Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JPA, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ. 2009;339:b2700.

Barnett-Page E, Thomas J. Methods for the synthesis of qualitative research: a critical review. BMC Med Res Methodol. 2009;9(1):59.


Kastner M, Tricco AC, Soobiah C, Lillie E, Perrier L, Horsley T, Welch V, Cogo E, Antony J, Straus SE. What is the most appropriate knowledge synthesis method to conduct a review? Protocol for a scoping review. BMC Med Res Methodol. 2012;12(1):1–1.


Booth A, Noyes J, Flemming K, Gerhardus A. Guidance on choosing qualitative evidence synthesis methods for use in health technology assessments of complex interventions. In: Integrate-HTA. 2016.

Booth A, Sutton A, Papaioannou D. Systematic approaches to successful literature review. 2nd ed. London: Sage; 2016.

Hannes K, Lockwood C. Synthesizing qualitative research: choosing the right approach. Chichester: Wiley-Blackwell; 2012.

Suri H. Towards methodologically inclusive research syntheses: expanding possibilities. New York: Routledge; 2014.

Campbell M, Egan M, Lorenc T, Bond L, Popham F, Fenton C, Benzeval M. Considering methodological options for reviews of theory: illustrated by a review of theories linking income and health. Syst Rev. 2014;3(1):1–11.

Cohen DJ, Crabtree BF. Evaluative criteria for qualitative research in health care: controversies and recommendations. Ann Fam Med. 2008;6(4):331–9.

Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–57.


Moher D, Schulz KF, Simera I, Altman DG. Guidance for developers of health research reporting guidelines. PLoS Med. 2010;7(2):e1000217.

Moher D, Tetzlaff J, Tricco AC, Sampson M, Altman DG. Epidemiology and reporting characteristics of systematic reviews. PLoS Med. 2007;4(3):e78.

Chan AW, Altman DG. Epidemiology and reporting of randomised trials published in PubMed journals. Lancet. 2005;365(9465):1159–62.

Alshurafa M, Briel M, Akl EA, Haines T, Moayyedi P, Gentles SJ, Rios L, Tran C, Bhatnagar N, Lamontagne F, et al. Inconsistent definitions for intention-to-treat in relation to missing outcome data: systematic review of the methods literature. PLoS One. 2012;7(11):e49163.


Gentles SJ, Charles C, Ploeg J, McKibbon KA. Sampling in qualitative research: insights from an overview of the methods literature. Qual Rep. 2015;20(11):1772–89.

Harzing A-W, Alakangas S. Google Scholar, Scopus and the Web of Science: a longitudinal and cross-disciplinary comparison. Scientometrics. 2016;106(2):787–804.

Harzing A-WK, van der Wal R. Google Scholar as a new source for citation analysis. Ethics Sci Environ Polit. 2008;8(1):61–73.

Kousha K, Thelwall M. Google Scholar citations and Google Web/URL citations: a multi‐discipline exploratory analysis. J Assoc Inf Sci Technol. 2007;58(7):1055–65.

Hirsch JE. An index to quantify an individual’s scientific research output. Proc Natl Acad Sci U S A. 2005;102(46):16569–72.

Booth A, Carroll C. How to build up the actionable knowledge base: the role of ‘best fit’ framework synthesis for studies of improvement in healthcare. BMJ Quality Safety. 2015;24(11):700–8.

Carroll C, Booth A, Leaviss J, Rick J. “Best fit” framework synthesis: refining the method. BMC Med Res Methodol. 2013;13(1):37.

Carroll C, Booth A, Cooper K. A worked example of “best fit” framework synthesis: a systematic review of views concerning the taking of some potential chemopreventive agents. BMC Med Res Methodol. 2011;11(1):29.

Cohen MZ, Kahn DL, Steeves DL. Hermeneutic phenomenological research: a practical guide for nurse researchers. Thousand Oaks: Sage; 2000.

Noblit GW, Hare RD. Meta-ethnography: synthesizing qualitative studies. Newbury Park: Sage; 1988.


Melendez-Torres GJ, Grant S, Bonell C. A systematic review and critical appraisal of qualitative metasynthetic practice in public health to develop a taxonomy of operations of reciprocal translation. Res Synthesis Methods. 2015;6(4):357–71.


Glaser BG, Strauss A. The discovery of grounded theory. Chicago: Aldine; 1967.

Dixon-Woods M, Agarwal S, Young B, Jones D, Sutton A. Integrative approaches to qualitative and quantitative evidence. In: UK National Health Service. 2004. p. 1–44.


Acknowledgements

Not applicable.

Funding

There was no funding for this work.

Availability of data and materials

The systematic methods overview used as a worked example in this article (Gentles SJ, Charles C, Ploeg J, McKibbon KA: Sampling in qualitative research: insights from an overview of the methods literature. The Qual Rep 2015, 20(11):1772-1789) is available from http://nsuworks.nova.edu/tqr/vol20/iss11/5 .

Authors’ contributions

SJG wrote the first draft of this article, with CC contributing to drafting. All authors contributed to revising the manuscript. All authors except CC (deceased) approved the final draft. SJG, CC, KAM, and JP were involved in developing methods for the systematic methods overview on sampling.


Competing interests

The authors declare that they have no competing interests.

Consent for publication

Ethics approval and consent to participate

Authors and affiliations

Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario, Canada

Stephen J. Gentles, Cathy Charles & K. Ann McKibbon

Faculty of Social Work, University of Calgary, Alberta, Canada

David B. Nicholas

School of Nursing, McMaster University, Hamilton, Ontario, Canada

Jenny Ploeg

CanChild Centre for Childhood Disability Research, McMaster University, 1400 Main Street West, IAHS 408, Hamilton, ON, L8S 1C7, Canada

Stephen J. Gentles


Corresponding author

Correspondence to Stephen J. Gentles .

Additional information

Cathy Charles is deceased

Additional file

Additional file 1: Analysis_matrices. (DOC 330 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article

Gentles, S.J., Charles, C., Nicholas, D.B. et al. Reviewing the research methods literature: principles and strategies illustrated by a systematic overview of sampling in qualitative research. Syst Rev 5 , 172 (2016). https://doi.org/10.1186/s13643-016-0343-0

Download citation

Received: 06 June 2016

Accepted: 14 September 2016

Published: 11 October 2016

DOI: https://doi.org/10.1186/s13643-016-0343-0


Keywords

  • Systematic review
  • Literature selection
  • Research methods
  • Research methodology
  • Overview of methods
  • Systematic methods overview
  • Review methods



What is research methodology?


The basics of research methodology

  • Why do you need a research methodology?
  • What needs to be included?
  • Why do you need to document your research method?
  • What are the different types of research instruments?
  • Qualitative / quantitative / mixed research methodologies
  • How do you choose the best research methodology for you?
  • Frequently asked questions about research methodology
  • Related articles

When you’re working on your first piece of academic research, there are many different things to focus on, and it can be overwhelming to stay on top of everything. This is especially true of budding or inexperienced researchers.

If you’ve never put together a research proposal before or find yourself in a position where you need to explain your research methodology decisions, there are a few things you need to be aware of.

Once you understand the ins and outs, handling academic research in the future will be less intimidating. We break down the basics below:

A research methodology encompasses the way in which you intend to carry out your research. This includes how you plan to tackle things like collection methods, statistical analysis, participant observations, and more.

You can think of your research methodology as being a formula. One part will be how you plan on putting your research into practice, and another will be why you feel this is the best way to approach it. Your research methodology is ultimately a methodological and systematic plan to resolve your research problem.

In short, you are explaining how you will take your idea and turn it into a study, which in turn will produce valid and reliable results that are in accordance with the aims and objectives of your research. This is true whether your paper plans to make use of qualitative methods or quantitative methods.

The purpose of a research methodology is to explain the reasoning behind your approach to your research - you'll need to support your collection methods, methods of analysis, and other key points of your work.

Think of it like writing a plan or an outline for what you intend to do.

When carrying out research, it can be easy to go off-track or depart from your standard methodology.

Tip: Having a methodology keeps you accountable and on track with your original aims and objectives, and gives you a suitable and sound plan to keep your project manageable, smooth, and effective.

With all that said, how do you write out your standard approach to a research methodology?

As a general plan, your methodology should include the following information:

  • Your research method.  You need to state whether you plan to use quantitative analysis, qualitative analysis, or mixed-method research methods. This will often be determined by what you hope to achieve with your research.
  • Explain your reasoning. Why are you taking this methodological approach? Why is this particular methodology the best way to answer your research problem and achieve your objectives?
  • Explain your instruments.  This will mainly be about your collection methods. There are various instruments you can use, such as interviews, physical surveys, and questionnaires. Your methodology will need to detail your reasoning in choosing a particular instrument for your research.
  • What will you do with your results?  How are you going to analyze the data once you have gathered it?
  • Advise your reader.  If there is anything in your research methodology that your reader might be unfamiliar with, you should explain it in more detail. For example, you should give any background information to your methods that might be relevant or provide your reasoning if you are conducting your research in a non-standard way.
  • How will your sampling process go?  What will your sampling procedure be and why? For example, if you will collect data by carrying out semi-structured or unstructured interviews, how will you choose your interviewees and how will you conduct the interviews themselves?
  • Any practical limitations?  You should discuss any limitations you foresee being an issue when you’re carrying out your research.

In any dissertation, thesis, or academic journal, you will always find a chapter dedicated to explaining the research methodology of the person who carried out the study, also referred to as the methodology section of the work.

A good research methodology will explain what you are going to do and why, while a poor methodology will lead to a messy or disorganized approach.

You should also be able to justify in this section your reasoning for why you intend to carry out your research in a particular way, especially if it might be a particularly unique method.

Having a sound methodology in place can also help you with the following:

  • When another researcher at a later date wishes to try and replicate your research, they will need your explanations and guidelines.
  • In the event that you receive any criticism or questioning on the research you carried out at a later point, you will be able to refer back to it and succinctly explain the how and why of your approach.
  • It provides you with a plan to follow throughout your research. When you are drafting your methodology approach, you need to be sure that the method you are using is the right one for your goal. This will help you with both explaining and understanding your method.
  • It affords you the opportunity to document from the outset what you intend to achieve with your research, from start to finish.

A research instrument is a tool you will use to help you collect, measure and analyze the data you use as part of your research.

The choice of research instrument will usually be yours to make as the researcher and will be whichever best suits your methodology.

There are many different research instruments you can use in collecting data for your research.

Generally, they can be grouped as follows:

  • Interviews (either as a group or one-on-one). You can carry out interviews in many different ways. For example, your interview can be structured, semi-structured, or unstructured. The difference between them is how formal the set of questions is that is asked of the interviewee. In a group interview, you may choose to ask the interviewees to give you their opinions or perceptions on certain topics.
  • Surveys (online or in-person). In survey research, you are posing questions in which you ask for a response from the person taking the survey. You may wish to have either free-answer questions such as essay-style questions, or you may wish to use closed questions such as multiple choice. You may even wish to make the survey a mixture of both.
  • Focus Groups.  Similar to the group interview above, you may wish to ask a focus group to discuss a particular topic or opinion while you make a note of the answers given.
  • Observations.  This is a good research instrument to use if you are looking into human behaviors. Different ways of researching this include studying the spontaneous behavior of participants in their everyday life, or something more structured. A structured observation is research conducted at a set time and place where researchers observe behavior as planned and agreed upon with participants.

These are the most common ways of carrying out research, but it is really dependent on your needs as a researcher and what approach you think is best to take.

It is also possible to combine a number of research instruments if this is necessary and appropriate in answering your research problem.

There are three different types of methodologies, and they are distinguished by whether they focus on words, numbers, or both.

  • Quantitative. This methodology focuses on measuring and testing numerical data; when using this form of research, your objective will usually be to confirm something, for example if you are looking to test a set of hypotheses. Typical methods include surveys, tests, and existing databases.
  • Qualitative. Qualitative research is a process of collecting and analyzing words and textual data. This form of research methodology is sometimes used where the aim and objective of the research are exploratory, for example when you are trying to understand human actions in a sociology or psychology study. Typical methods include observations, interviews, and focus groups.
  • Mixed-method. A mixed-method approach combines both of the above: the quantitative approach provides definitive facts and figures, whereas the qualitative methodology gives your research a human aspect. Where you can use a mixed method of research, this can produce some incredibly interesting results, because it provides data that is both exact and exploratory at the same time.

➡️ Want to learn more about the differences between qualitative and quantitative research, and how to use both methods? Check out our guide for that!

If you've done your due diligence, you'll have an idea of which methodology approach is best suited to your research.

It’s likely that you will have carried out considerable reading and homework before you reach this point and you may have taken inspiration from other similar studies that have yielded good results.

Still, it is important to consider different options before setting your research in stone. Exploring different options available will help you to explain why the choice you ultimately make is preferable to other methods.

If addressing your research problem requires you to gather large volumes of numerical data to test hypotheses, a quantitative research method is likely to provide you with the most usable results.

If instead you’re looking to try and learn more about people, and their perception of events, your methodology is more exploratory in nature and would therefore probably be better served using a qualitative research methodology.

It helps to always bring things back to the question: what do I want to achieve with my research?

Once you have conducted your research, you need to analyze it. Here are some helpful guides for qualitative data analysis:

➡️  How to do a content analysis

➡️  How to do a thematic analysis

➡️  How to do a rhetorical analysis

Research methodology refers to the techniques used to find and analyze information for a study, ensuring that the results are valid, reliable and that they address the research objective.

Data can typically be organized into four different categories or methods: observational, experimental, simulation, and derived.

Writing a methodology section is a process of introducing your methods and instruments, discussing your analysis, providing more background information, addressing your research limitations, and more.

Your research methodology section will need a clear research question and proposed research approach. You'll need to add a background, introduce your research question, write your methodology and add the works you cited during your data collecting phase.

The research methodology section of your study will indicate how valid your findings are and how well-informed your paper is. It also assists future researchers planning to use the same methodology, who want to cite your study or replicate it.



What Is a Research Methodology? | Steps & Tips

Published on August 25, 2022 by Shona McCombes and Tegan George. Revised on November 20, 2023.

Your research methodology discusses and explains the data collection and analysis methods you used in your research. A key part of your thesis, dissertation , or research paper , the methodology chapter explains what you did and how you did it, allowing readers to evaluate the reliability and validity of your research and your dissertation topic .

It should include:

  • The type of research you conducted
  • How you collected and analyzed your data
  • Any tools or materials you used in the research
  • How you mitigated or avoided research biases
  • Why you chose these methods

Tip: Your methodology section should generally be written in the past tense. Academic style guides in your field may provide detailed guidelines on what to include for different types of studies, and your citation style might provide guidelines for your methodology section (e.g., an APA Style methods section).


Table of contents

  • How to write a research methodology
  • Why is a methods section important?
  • Step 1: Explain your methodological approach
  • Step 2: Describe your data collection methods
  • Step 3: Describe your analysis method
  • Step 4: Evaluate and justify the methodological choices you made
  • Tips for writing a strong methodology chapter
  • Other interesting articles
  • Frequently asked questions about methodology


Why is a methods section important?

Your methods section is your opportunity to share how you conducted your research and why you chose the methods you chose. It's also the place to show that your research was rigorously conducted and can be replicated.

It gives your research legitimacy and situates it within your field, and also gives your readers a place to refer to if they have any questions or critiques in other sections.

Step 1: Explain your methodological approach

You can start by introducing your overall approach to your research. You have two options here.

Option 1: Start with your “what”

What research problem or question did you investigate?

  • Aim to describe the characteristics of something?
  • Explore an under-researched topic?
  • Establish a causal relationship?

And what type of data did you need to achieve this aim?

  • Quantitative data , qualitative data , or a mix of both?
  • Primary data collected yourself, or secondary data collected by someone else?
  • Experimental data gathered by controlling and manipulating variables, or descriptive data gathered via observations?

Option 2: Start with your “why”

Depending on your discipline, you can also start with a discussion of the rationale and assumptions underpinning your methodology. In other words, why did you choose these methods for your study?

  • Why is this the best way to answer your research question?
  • Is this a standard methodology in your field, or does it require justification?
  • Were there any ethical considerations involved in your choices?
  • What are the criteria for validity and reliability in this type of research ? How did you prevent bias from affecting your data?

Step 2: Describe your data collection methods

Once you have introduced your reader to your methodological approach, you should share full details about your data collection methods.

Quantitative methods

In order to be considered generalizable, you should describe quantitative research methods in enough detail for another researcher to replicate your study.

Here, explain how you operationalized your concepts and measured your variables. Discuss your sampling method or inclusion and exclusion criteria , as well as any tools, procedures, and materials you used to gather your data.

Surveys: Describe where, when, and how the survey was conducted.

  • How did you design the questionnaire?
  • What form did your questions take (e.g., multiple choice, Likert scale )?
  • Were your surveys conducted in-person or virtually?
  • What sampling method did you use to select participants?
  • What was your sample size and response rate?

Experiments: Share full details of the tools, techniques, and procedures you used to conduct your experiment.

  • How did you design the experiment ?
  • How did you recruit participants?
  • How did you manipulate and measure the variables ?
  • What tools did you use?

Existing data: Explain how you gathered and selected the material (such as datasets or archival data) that you used in your analysis.

  • Where did you source the material?
  • How was the data originally produced?
  • What criteria did you use to select material (e.g., date range)?

Example: The survey consisted of 5 multiple-choice questions and 10 questions measured on a 7-point Likert scale.

The goal was to collect survey responses from 350 customers visiting the fitness apparel company’s brick-and-mortar location in Boston on July 4–8, 2022, between 11:00 and 15:00.

Here, a customer was defined as a person who had purchased a product from the company on the day they took the survey. Participants were given 5 minutes to fill in the survey anonymously. In total, 408 customers responded, but not all surveys were fully completed. Due to this, 371 survey results were included in the analysis.
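As a minimal illustration of the screening step in this example, the sketch below assumes a hypothetical CSV export named survey_responses.csv with one row per respondent and blank cells for unanswered questions; dropping incomplete rows stands in for whatever completeness rule was actually applied.

```python
# Minimal sketch of the screening step above (hypothetical file and layout).
import pandas as pd

# One row per respondent; unanswered questions appear as empty cells.
responses = pd.read_csv("survey_responses.csv")

# Keep only fully completed surveys, mirroring the 408 -> 371 screening step.
complete = responses.dropna()

print(f"Collected: {len(responses)}, included in analysis: {len(complete)}")
print(f"Completion rate: {len(complete) / len(responses):.1%}")
```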

Potential sources of research bias and related effects to watch out for here include:

  • Information bias
  • Omitted variable bias
  • Regression to the mean
  • Survivorship bias
  • Undercoverage bias
  • Sampling bias

Qualitative methods

In qualitative research , methods are often more flexible and subjective. For this reason, it’s crucial to robustly explain the methodology choices you made.

Be sure to discuss the criteria you used to select your data, the context in which your research was conducted, and the role you played in collecting your data (e.g., were you an active participant, or a passive observer?)

Interviews or focus groups: Describe where, when, and how the interviews were conducted.

  • How did you find and select participants?
  • How many participants took part?
  • What form did the interviews take ( structured , semi-structured , or unstructured )?
  • How long were the interviews?
  • How were they recorded?

Participant observation: Describe where, when, and how you conducted the observation or ethnography.

  • What group or community did you observe? How long did you spend there?
  • How did you gain access to this group? What role did you play in the community?
  • How long did you spend conducting the research? Where was it located?
  • How did you record your data (e.g., audiovisual recordings, note-taking)?

Existing data: Explain how you selected case study materials for your analysis.

  • What type of materials did you analyze?
  • How did you select them?

Example: In order to gain better insight into possibilities for future improvement of the fitness store's product range, semi-structured interviews were conducted with 8 returning customers.

Here, a returning customer was defined as someone who usually bought products at least twice a week from the store.

Surveys were used to select participants. Interviews were conducted in a small office next to the cash register and lasted approximately 20 minutes each. Answers were recorded by note-taking, and seven interviews were also filmed with consent. One interviewee preferred not to be filmed.

Potential sources of research bias and related effects to watch out for here include:

  • The Hawthorne effect
  • Observer bias
  • The placebo effect
  • Response bias and Nonresponse bias
  • The Pygmalion effect
  • Recall bias
  • Social desirability bias
  • Self-selection bias

Mixed methods

Mixed methods research combines quantitative and qualitative approaches. If a standalone quantitative or qualitative study is insufficient to answer your research question, mixed methods may be a good fit for you.

Mixed methods are less common than standalone analyses, largely because they require a great deal of effort to pull off successfully. If you choose to pursue mixed methods, it’s especially important to robustly justify your methods.


Step 3: Describe your analysis method

Next, you should indicate how you processed and analyzed your data. Avoid going into too much detail: you should not start introducing or discussing any of your results at this stage.

In quantitative research, your analysis will be based on numbers. In your methods section, you can include (a brief sketch follows this list):

  • How you prepared the data before analyzing it (e.g., checking for missing data , removing outliers , transforming variables)
  • Which software you used (e.g., SPSS, Stata or R)
  • Which statistical tests you used (e.g., two-tailed t test , simple linear regression )
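Here is a minimal sketch of those preparation and analysis steps. It uses Python (pandas and SciPy) rather than the packages named above, and assumes a hypothetical file study_data.csv with columns group, age, and score; the column names and the |z| > 3 outlier rule are illustrative choices, not prescriptions.

```python
# Minimal sketch: data preparation, a two-tailed t test, and a simple linear
# regression. File name, column names, and the outlier rule are illustrative.
import numpy as np
import pandas as pd
from scipy import stats

df = pd.read_csv("study_data.csv")

# Prepare the data: drop rows with missing values, then remove outliers
# (here defined as scores more than 3 standard deviations from the mean).
df = df.dropna(subset=["group", "age", "score"])
df = df[np.abs(stats.zscore(df["score"])) <= 3]

# Two-tailed independent-samples t test comparing two groups on "score".
group_a = df.loc[df["group"] == "A", "score"]
group_b = df.loc[df["group"] == "B", "score"]
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Simple linear regression of score on age.
slope, intercept, r_value, p_reg, std_err = stats.linregress(df["age"], df["score"])
print(f"score = {intercept:.2f} + {slope:.2f} * age (R^2 = {r_value ** 2:.2f})")
```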

In qualitative research, your analysis will be based on language, images, and observations (often involving some form of textual analysis ).

Specific methods might include the following (a simple coding sketch follows this list):

  • Content analysis : Categorizing and discussing the meaning of words, phrases and sentences
  • Thematic analysis : Coding and closely examining the data to identify broad themes and patterns
  • Discourse analysis : Studying communication and meaning in relation to their social context
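As a deliberately simplified illustration of the counting step behind content analysis, the sketch below tallies predefined category keywords in two invented interview excerpts. Real content analysis, and thematic or discourse analysis even more so, is interpretive and involves far more than keyword counting; the coding frame and excerpts here are made up.

```python
# Deliberately simplified sketch: tally predefined category keywords in a few
# interview excerpts. The coding frame and excerpts are invented.
from collections import Counter

coding_frame = {
    "price": ["price", "cost", "expensive", "cheap"],
    "quality": ["quality", "durable", "material"],
    "service": ["staff", "service", "helpful"],
}

excerpts = [
    "The staff were helpful, but the prices felt expensive.",
    "Great quality and durable material, well worth the cost.",
]

counts = Counter()
for text in excerpts:
    lowered = text.lower()
    for category, keywords in coding_frame.items():
        counts[category] += sum(lowered.count(word) for word in keywords)

print(dict(counts))  # e.g. {'price': 3, 'quality': 3, 'service': 2}
```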

Mixed methods combine the above two research methods, integrating both qualitative and quantitative approaches into one coherent analytical process.

Step 4: Evaluate and justify the methodological choices you made

Above all, your methodology section should clearly make the case for why you chose the methods you did. This is especially true if you did not take the most standard approach to your topic. In this case, discuss why other methods were not suitable for your objectives, and show how this approach contributes new knowledge or understanding.

In any case, it should be overwhelmingly clear to your reader that you set yourself up for success in terms of your methodology’s design. Show how your methods should lead to results that are valid and reliable, while leaving the analysis of the meaning, importance, and relevance of your results for your discussion section .

  • Quantitative: Lab-based experiments cannot always accurately simulate real-life situations and behaviors, but they are effective for testing causal relationships between variables .
  • Qualitative: Unstructured interviews usually produce results that cannot be generalized beyond the sample group , but they provide a more in-depth understanding of participants’ perceptions, motivations, and emotions.
  • Mixed methods: Despite issues systematically comparing differing types of data, a solely quantitative study would not sufficiently incorporate the lived experience of each participant, while a solely qualitative study would be insufficiently generalizable.

Remember that your aim is not just to describe your methods, but to show how and why you applied them. Again, it’s critical to demonstrate that your research was rigorously conducted and can be replicated.

Tips for writing a strong methodology chapter

1. Focus on your objectives and research questions

The methodology section should clearly show why your methods suit your objectives and convince the reader that you chose the best possible approach to answering your problem statement and research questions .

2. Cite relevant sources

Your methodology can be strengthened by referencing existing research in your field. This can help you to:

  • Show that you followed established practice for your type of research
  • Discuss how you decided on your approach by evaluating existing research
  • Present a novel methodological approach to address a gap in the literature

3. Write for your audience

Consider how much information you need to give, and avoid getting too lengthy. If you are using methods that are standard for your discipline, you probably don’t need to give a lot of background or justification.

Regardless, your methodology should be a clear, well-structured text that makes an argument for your approach, not just a list of technical details and procedures.

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

Statistics

  • Normal distribution
  • Measures of central tendency
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles

Methodology

  • Cluster sampling
  • Stratified sampling
  • Thematic analysis
  • Cohort study
  • Peer review
  • Ethnography

Research bias

  • Implicit bias
  • Cognitive bias
  • Conformity bias
  • Hawthorne effect
  • Availability heuristic
  • Attrition bias

Frequently asked questions about methodology

Methodology refers to the overarching strategy and rationale of your research project . It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.

Methods are the specific tools and procedures you use to collect and analyze data (for example, experiments, surveys , and statistical tests ).

In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section .

In a longer or more complex research project, such as a thesis or dissertation , you will probably include a methodology section , where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.

In a scientific paper, the methodology always comes after the introduction and before the results , discussion and conclusion . The same basic structure also applies to a thesis, dissertation , or research proposal .

Depending on the length and type of document, you might also include a literature review or theoretical framework before the methodology.

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

Reliability and validity are both about how well a method measures something:

  • Reliability refers to the  consistency of a measure (whether the results can be reproduced under the same conditions).
  • Validity   refers to the  accuracy of a measure (whether the results really do represent what they are supposed to measure).

If you are doing experimental research, you also have to consider the internal and external validity of your experiment.

A sample is a subset of individuals from a larger population . Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

In statistics, sampling allows you to test a hypothesis about the characteristics of a population.
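A minimal sketch of drawing such a sample, assuming a hypothetical file student_roster.txt that lists one student ID per line (and contains at least 100 entries):

```python
# Minimal sketch: a simple random sample of 100 students from a hypothetical roster.
import random

with open("student_roster.txt") as f:      # one student ID per line
    population = [line.strip() for line in f if line.strip()]

random.seed(42)                             # fixed seed so the draw can be reproduced
sample = random.sample(population, k=100)   # simple random sampling without replacement

print(f"Population size: {len(population)}, sample size: {len(sample)}")
```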

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.

McCombes, S. & George, T. (2023, November 20). What Is a Research Methodology? | Steps & Tips. Scribbr. Retrieved August 8, 2024, from https://www.scribbr.com/dissertation/methodology/



The Use of Research Methods in Psychological Research: A Systematised Review

Salomé Elizabeth Scholtz

1 Community Psychosocial Research (COMPRES), School of Psychosocial Health, North-West University, Potchefstroom, South Africa

Werner de Klerk

Leon T. de Beer

2 WorkWell Research Institute, North-West University, Potchefstroom, South Africa

Abstract

Research methods play an imperative role in research quality as well as in educating young researchers; however, how these methods are applied is unclear, which can be detrimental to the field of psychology. Therefore, this systematised review aimed to determine what research methods are being used, how these methods are being used, and for what topics in the field. Our review of 999 articles from five journals over a period of 5 years indicated that psychology research is conducted on 10 topics via predominantly quantitative research methods. Of these 10 topics, social psychology was the most popular. The remainder of the reported methodology is described. It was also found that articles lacked rigour and transparency in the methodology used, which has implications for replicability. In conclusion, this article provides an overview of all reported methodologies used in a sample of psychology journals. It highlights the popularity and application of methods and designs throughout the article sample, as well as an unexpected lack of rigour with regard to most aspects of methodology. Possible sample bias should be considered when interpreting the results of this study. It is recommended that future research utilise the results of this study to determine the possible impact on the field of psychology as a science and to further investigate the use of research methods. The results should prompt future research into the lack of rigour and its implications for replication, the use of certain methods above others, publication bias, and the choice of sampling method.

Introduction

Psychology is an ever-growing and popular field (Gough and Lyons, 2016 ; Clay, 2017 ). Due to this growth and the need for science-based research to base health decisions on (Perestelo-Pérez, 2013 ), the use of research methods in the broad field of psychology is an essential point of investigation (Stangor, 2011 ; Aanstoos, 2014 ). Research methods are therefore viewed as important tools used by researchers to collect data (Nieuwenhuis, 2016 ) and include the following: quantitative, qualitative, mixed method and multi method (Maree, 2016 ). Additionally, researchers also employ various types of literature reviews to address research questions (Grant and Booth, 2009 ). According to literature, what research method is used and why a certain research method is used is complex as it depends on various factors that may include paradigm (O'Neil and Koekemoer, 2016 ), research question (Grix, 2002 ), or the skill and exposure of the researcher (Nind et al., 2015 ). How these research methods are employed is also difficult to discern as research methods are often depicted as having fixed boundaries that are continuously crossed in research (Johnson et al., 2001 ; Sandelowski, 2011 ). Examples of this crossing include adding quantitative aspects to qualitative studies (Sandelowski et al., 2009 ), or stating that a study used a mixed-method design without the study having any characteristics of this design (Truscott et al., 2010 ).

The inappropriate use of research methods affects how students and researchers improve and utilise their research skills (Scott Jones and Goldring, 2015 ), how theories are developed (Ngulube, 2013 ), and the credibility of research results (Levitt et al., 2017 ). This, in turn, can be detrimental to the field (Nind et al., 2015 ), journal publication (Ketchen et al., 2008 ; Ezeh et al., 2010 ), and attempts to address public social issues through psychological research (Dweck, 2017 ). This is especially important given the now well-known replication crisis the field is facing (Earp and Trafimow, 2015 ; Hengartner, 2018 ).

Due to this lack of clarity on method use and the potential impact of inept use of research methods, the aim of this study was to explore the use of research methods in the field of psychology through a review of journal publications. Chaichanasakul et al. (2011) identify reviewing articles as the opportunity to examine the development, growth and progress of a research area and the overall quality of a journal. Studies such as Lee et al. (1999) and Bluhm et al.'s (2011) review of qualitative methods have attempted to synthesise the use of research methods and have indicated the growth of qualitative research in American and European journals. Research has also focused on the use of research methods in specific sub-disciplines of psychology; for example, in the field of Industrial and Organisational psychology, Coetzee and Van Zyl (2014) found that South African publications tend to consist of cross-sectional quantitative research methods, with longitudinal studies underrepresented. Qualitative studies were found to make up 21% of the articles published from 1995 to 2015 in a similar study by O'Neil and Koekemoer (2016). Other methods, such as mixed methods research in health psychology, have also reportedly been growing in popularity (O'Cathain, 2009).

A broad overview of the use of research methods in the field of psychology as a whole is, however, not available in the literature. Therefore, our research focused on answering what research methods are being used, how these methods are being used, and for what topics in practice (i.e., journal publications), in order to provide a general perspective of the methods used in psychology publications. We synthesised the collected data into the following format: research topic [areas of scientific discourse in a field or the current needs of a population (Bittermann and Fischer, 2018)], method [data-gathering tools (Nieuwenhuis, 2016)], sampling [elements chosen from a population to partake in research (Ritchie et al., 2009)], data collection [techniques and research strategy (Maree, 2016)], and data analysis [discovering information by examining bodies of data (Ktepi, 2016)]. A systematised review of recent articles (2013 to 2017) collected from five different journals in the field of psychological research was conducted.

Grant and Booth ( 2009 ) describe systematised reviews as the review of choice for post-graduate studies, which is employed using some elements of a systematic review and seldom more than one or two databases to catalogue studies after a comprehensive literature search. The aspects used in this systematised review that are similar to that of a systematic review were a full search within the chosen database and data produced in tabular form (Grant and Booth, 2009 ).

Sample sizes and timelines vary in systematised reviews (see Lowe and Moore, 2014 ; Pericall and Taylor, 2014 ; Barr-Walker, 2017 ). With no clear parameters identified in the literature (see Grant and Booth, 2009 ), the sample size of this study was determined by the purpose of the sample (Strydom, 2011 ), and time and cost constraints (Maree and Pietersen, 2016 ). Thus, a non-probability purposive sample (Ritchie et al., 2009 ) of the top five psychology journals from 2013 to 2017 was included in this research study. Per Lee ( 2015 ) American Psychological Association (APA) recommends the use of the most up-to-date sources for data collection with consideration of the context of the research study. As this research study focused on the most recent trends in research methods used in the broad field of psychology, the identified time frame was deemed appropriate.

Psychology journals were only included if they formed part of the top five English journals in the miscellaneous psychology domain of the Scimago Journal and Country Rank (Scimago Journal & Country Rank, 2017 ). The Scimago Journal and Country Rank provides a yearly updated list of publicly accessible journal and country-specific indicators derived from the Scopus® database (Scopus, 2017b ) by means of the Scimago Journal Rank (SJR) indicator developed by Scimago from the algorithm Google PageRank™ (Scimago Journal & Country Rank, 2017 ). Scopus is the largest global database of abstracts and citations from peer-reviewed journals (Scopus, 2017a ). Reasons for the development of the Scimago Journal and Country Rank list was to allow researchers to assess scientific domains, compare country rankings, and compare and analyse journals (Scimago Journal & Country Rank, 2017 ), which supported the aim of this research study. Additionally, the goals of the journals had to focus on topics in psychology in general with no preference to specific research methods and have full-text access to articles.

The following list of the top five journals in 2018 fell within the abovementioned inclusion criteria: (1) Australian Journal of Psychology, (2) British Journal of Psychology, (3) Europe's Journal of Psychology, (4) International Journal of Psychology, and lastly (5) the Journal of Psychology Applied and Interdisciplinary.

Journals were excluded from this systematised review if no full-text versions of their articles were available, if journals explicitly stated a publication preference for certain research methods, or if the journal only published articles in a specific discipline of psychological research (for example, industrial psychology, clinical psychology etc.).

The researchers followed a procedure (see Figure 1 ) adapted from that of Ferreira et al. ( 2016 ) for systematised reviews. Data collection and categorisation commenced on 4 December 2017 and continued until 30 June 2019. All the data was systematically collected and coded manually (Grant and Booth, 2009 ) with an independent person acting as co-coder. Codes of interest included the research topic, method used, the design used, sampling method, and methodology (the method used for data collection and data analysis). These codes were derived from the wording in each article. Themes were created based on the derived codes and checked by the co-coder. Lastly, these themes were catalogued into a table as per the systematised review design.

Figure 1. Systematised review procedure.

According to Johnston et al. ( 2019 ), “literature screening, selection, and data extraction/analyses” (p. 7) are specifically tailored to the aim of a review. Therefore, the steps followed in a systematic review must be reported in a comprehensive and transparent manner. The chosen systematised design adhered to the rigour expected from systematic reviews with regard to full search and data produced in tabular form (Grant and Booth, 2009 ). The rigorous application of the systematic review is, therefore discussed in relation to these two elements.

Firstly, to ensure a comprehensive search, this research study promoted review transparency by following a clear protocol outlined according to each review stage before collecting data (Johnston et al., 2019 ). This protocol was similar to that of Ferreira et al. ( 2016 ) and approved by three research committees/stakeholders and the researchers (Johnston et al., 2019 ). The eligibility criteria for article inclusion was based on the research question and clearly stated, and the process of inclusion was recorded on an electronic spreadsheet to create an evidence trail (Bandara et al., 2015 ; Johnston et al., 2019 ). Microsoft Excel spreadsheets are a popular tool for review studies and can increase the rigour of the review process (Bandara et al., 2015 ). Screening for appropriate articles for inclusion forms an integral part of a systematic review process (Johnston et al., 2019 ). This step was applied to two aspects of this research study: the choice of eligible journals and articles to be included. Suitable journals were selected by the first author and reviewed by the second and third authors. Initially, all articles from the chosen journals were included. Then, by process of elimination, those irrelevant to the research aim, i.e., interview articles or discussions etc., were excluded.

To ensure rigourous data extraction, data was first extracted by one reviewer, and an independent person verified the results for completeness and accuracy (Johnston et al., 2019 ). The research question served as a guide for efficient, organised data extraction (Johnston et al., 2019 ). Data was categorised according to the codes of interest, along with article identifiers for audit trails such as authors, title and aims of articles. The categorised data was based on the aim of the review (Johnston et al., 2019 ) and synthesised in tabular form under methods used, how these methods were used, and for what topics in the field of psychology.
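To illustrate the kind of tabular extraction described here, the following is a minimal sketch using Python and pandas; the rows, column names, and values are invented for illustration and are not taken from the study's actual spreadsheet.

```python
# Illustrative sketch of the tabular extraction: one row per article, with the
# codes of interest recorded alongside article identifiers. Entries are invented.
import pandas as pd

extraction = pd.DataFrame(
    [
        {"authors": "Author A (2015)", "topic": "Social psychology",
         "method": "Quantitative", "sampling": "Convenience sampling",
         "design": "Cross-sectional", "data_collection": "Questionnaire",
         "data_analysis": "Descriptive statistics"},
        {"authors": "Author B (2016)", "topic": "Cognitive psychology",
         "method": "Quantitative", "sampling": "Not stated",
         "design": "Experimental", "data_collection": "Experimental task",
         "data_analysis": "ANOVA"},
    ]
)

# Summarise how often each research method appears within each topic.
print(extraction.groupby(["topic", "method"]).size())
```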

The initial search produced a total of 1,145 articles from the 5 journals identified. Inclusion and exclusion criteria resulted in a final sample of 999 articles ( Figure 2 ). Articles were co-coded into 84 codes, from which 10 themes were derived ( Table 1 ).

Figure 2. Journal article frequency.

Table 1. Codes used to form themes (research topics). Each theme is shown with its number of codes and the codes assigned to it.

  • Social Psychology (31): Aggression SP, Attitude SP, Belief SP, Child abuse SP, Conflict SP, Culture SP, Discrimination SP, Economic, Family illness, Family, Group, Help, Immigration, Intergeneration, Judgement, Law, Leadership, Marriage SP, Media, Optimism, Organisational and Social justice, Parenting SP, Politics, Prejudice, Relationships, Religion, Romantic Relationships SP, Sex and attraction, Stereotype, Violence, Work
  • Experimental Psychology (17): Anxiety, stress and PTSD, Coping, Depression, Emotion, Empathy, Facial research, Fear and threat, Happiness, Humor, Mindfulness, Mortality, Motivation and Achievement, Perception, Rumination, Self, Self-efficacy
  • Cognitive Psychology (12): Attention, Cognition, Decision making, Impulse, Intelligence, Language, Math, Memory, Mental, Number, Problem solving, Reading
  • Health Psychology (7): Addiction, Body, Burnout, Health, Illness (Health Psychology), Sleep (Health Psychology), Suicide and Self-harm
  • Physiological Psychology (6): Gender, Health (Physiological psychology), Illness (Physiological psychology), Mood disorders, Sleep (Physiological psychology), Visual research
  • Developmental Psychology (3): Attachment, Development, Old age
  • Personality (3): Machiavellian, Narcissism, Personality
  • Psychological Psychology (3): Programme, Psychology practice, Theory
  • Education and Learning (1): Education and Learning
  • Psychometrics (1): Measure

Code total: 84

These 10 themes represent the topic section of our research question (Figure 3). All these topics, except for the final one, psychological practice, were found to concur with the research areas in psychology as identified by Weiten (2010). These research areas were chosen to represent the derived codes as they provided broad definitions that allowed for clear, concise categorisation of the vast amount of data. Article codes were categorised under particular themes/topics if they adhered to the research area definitions created by Weiten (2010). It is important to note that these areas of research do not refer to specific disciplines in psychology, such as industrial psychology, but to broader fields that may encompass sub-interests of these disciplines.

Figure 3. Topic frequency (international sample).

In the case of developmental psychology , researchers conduct research into human development from childhood to old age. Social psychology includes research on behaviour governed by social drivers. Researchers in the field of educational psychology study how people learn and the best way to teach them. Health psychology aims to determine the effect of psychological factors on physiological health. Physiological psychology , on the other hand, looks at the influence of physiological aspects on behaviour. Experimental psychology is not the only theme that uses experimental research and focuses on the traditional core topics of psychology (for example, sensation). Cognitive psychology studies the higher mental processes. Psychometrics is concerned with measuring capacity or behaviour. Personality research aims to assess and describe consistency in human behaviour (Weiten, 2010 ). The final theme of psychological practice refers to the experiences, techniques, and interventions employed by practitioners, researchers, and academia in the field of psychology.

Articles under these themes were further subdivided into methodologies: method, sampling, design, data collection, and data analysis. The categorisation was based on information stated in the articles and not inferred by the researchers. Data were compiled into two sets of results presented in this article. The first set addresses the aim of this study from the perspective of the topics identified. The second set of results represents a broad overview of the results from the perspective of the methodology employed. The second set of results is discussed in this article, while the first set is presented in table format. The discussion thus provides a broad overview of methods use in psychology (across all themes), while the table format provides readers with in-depth insight into methods used in the individual themes identified. We believe that presenting the data from both perspectives allows readers a broad understanding of the results. Due to the large amount of information that made up our results, we followed Cichocka and Jost (2014) in simplifying our results. Please note that the numbers indicated in the tables in terms of methodology differ from the total number of articles. Some articles employed more than one method/sampling technique/design/data collection method/data analysis in their studies.

What follows are the results for what methods are used, how these methods are used, and which topics in psychology they are applied to. Percentages are reported to the second decimal in order to highlight small differences in the occurrence of methodology.

Firstly, with regard to the research methods used, our results show that researchers are more likely to use quantitative research methods (90.22%) compared to all other research methods. Qualitative research was the second most common research method but only made up about 4.79% of the general method usage. Reviews occurred almost as much as qualitative studies (3.91%), as the third most popular method. Mixed-methods research studies (0.98%) occurred across most themes, whereas multi-method research was indicated in only one study and amounted to 0.10% of the methods identified. The specific use of each method in the topics identified is shown in Table 2 and Figure 4 .

Table 2. Research methods in psychology. Each row gives the number of articles using that method in each of the 10 research topics, followed by the row total.

Quantitative: 401, 162, 69, 60, 52, 52, 48, 28, 38, 13 (total 923)
Qualitative: 28, 4, 1, 0, 5, 2, 3, 5, 0, 1 (total 49)
Review: 11, 5, 2, 0, 3, 4, 1, 13, 0, 1 (total 40)
Mixed methods: 7, 0, 0, 0, 1, 0, 1, 1, 0, 0 (total 10)
Multi-method: 0, 0, 0, 0, 0, 0, 0, 0, 1, 0 (total 1)
Total per topic: 447, 171, 72, 60, 61, 58, 53, 47, 39, 15 (grand total 1,023)
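To make the arithmetic behind these figures explicit, here is a small Python sketch that derives the reported percentages from the Table 2 row totals; the code is illustrative and not part of the original study.

```python
# Reproducing the reported method percentages from the Table 2 row totals
# (1,023 method occurrences across the 999 articles, since some articles
# used more than one method).
method_counts = {
    "Quantitative": 923,
    "Qualitative": 49,
    "Review": 40,
    "Mixed methods": 10,
    "Multi-method": 1,
}

total = sum(method_counts.values())  # 1023
for method, count in method_counts.items():
    print(f"{method}: {count / total:.2%}")
# Quantitative: 90.22%  Qualitative: 4.79%  Review: 3.91%
# Mixed methods: 0.98%  Multi-method: 0.10%
```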

Figure 4. Research method frequency in topics.

Secondly, in the case of how these research methods are employed , our study indicated the following.

Sampling: 78.34% of the studies in the collected articles did not specify a sampling method. From the remainder of the studies, 13 types of sampling methods were identified. These sampling methods included broad categorisation of a sample as, for example, a probability or non-probability sample. General samples of convenience were the methods most likely to be applied (10.34%), followed by random sampling (3.51%), snowball sampling (2.73%), and purposive (1.37%) and cluster sampling (1.27%). The remainder of the sampling methods occurred to a more limited extent (0–1.0%). See Table 3 and Figure 5 for sampling methods employed in each topic.

Table 3. Sampling use in the field of psychology. Each row gives the number of articles using that sampling method in each of the 10 research topics, followed by the row total.

Not stated: 331, 153, 45, 57, 49, 43, 43, 38, 31, 14 (total 804)
Convenience sampling: 55, 8, 10, 1, 6, 8, 9, 2, 6, 1 (total 106)
Random sampling: 15, 3, 9, 1, 2, 2, 0, 2, 1, 1 (total 36)
Snowball sampling: 14, 4, 4, 1, 2, 0, 0, 3, 0, 0 (total 28)
Purposive sampling: 6, 0, 2, 0, 0, 2, 0, 3, 1, 0 (total 14)
Cluster sampling: 8, 1, 2, 0, 0, 2, 0, 0, 0, 0 (total 13)
Stratified sampling: 4, 1, 2, 0, 1, 1, 0, 0, 0, 0 (total 9)
Non-probability sampling: 4, 0, 1, 0, 0, 0, 0, 0, 1, 0 (total 6)
Probability sampling: 3, 1, 0, 0, 0, 0, 0, 0, 0, 0 (total 4)
Quota sampling: 1, 0, 1, 0, 0, 0, 0, 0, 0, 0 (total 2)
Criterion sampling: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0 (total 1)
Self-selection sampling: 1, 0, 0, 0, 0, 0, 0, 0, 0, 0 (total 1)
Unsystematic sampling: 0, 1, 0, 0, 0, 0, 0, 0, 0, 0 (total 1)
Total per topic: 443, 172, 76, 60, 60, 58, 52, 48, 40, 16 (grand total 1,025)

Figure 5. Sampling method frequency in topics.

Designs were categorised based on the articles' statement thereof. Therefore, it is important to note that, in the case of quantitative studies, non-experimental designs (25.55%) were often indicated due to a lack of experiments and any other indication of design, which, according to Laher ( 2016 ), is a reasonable categorisation. Non-experimental designs should thus be compared with experimental designs only in the description of data, as it could include the use of correlational/cross-sectional designs, which were not overtly stated by the authors. For the remainder of the research methods, “not stated” (7.12%) was assigned to articles without design types indicated.

From the 36 identified designs the most popular designs were cross-sectional (23.17%) and experimental (25.64%), which concurred with the high number of quantitative studies. Longitudinal studies (3.80%), the third most popular design, was used in both quantitative and qualitative studies. Qualitative designs consisted of ethnography (0.38%), interpretative phenomenological designs/phenomenology (0.28%), as well as narrative designs (0.28%). Studies that employed the review method were mostly categorised as “not stated,” with the most often stated review designs being systematic reviews (0.57%). The few mixed method studies employed exploratory, explanatory (0.09%), and concurrent designs (0.19%), with some studies referring to separate designs for the qualitative and quantitative methods. The one study that identified itself as a multi-method study used a longitudinal design. Please see how these designs were employed in each specific topic in Table 4 , Figure 6 .

Design use in the field of psychology.

Experimental design828236010128643
Non-experimental design1153051013171313143
Cross-sectional design123311211917215132
Correlational design5612301022042
Not stated377304241413
Longitudinal design21621122023
Quasi-experimental design4100002100
Systematic review3000110100
Cross-cultural design3001000100
Descriptive design2000003000
Ethnography4000000000
Literature review1100110000
Interpretative Phenomenological Analysis (IPA)2000100000
Narrative design1000001100
Case-control research design0000020000
Concurrent data collection design1000100000
Grounded Theory1000100000
Narrative review0100010000
Auto-ethnography1000000000
Case series evaluation0000000100
Case study1000000000
Comprehensive review0100000000
Descriptive-inferential0000000010
Explanatory sequential design1000000000
Exploratory mixed-method0000100100
Grounded ethnographic design0100000000
Historical cohort design0100000000
Historical research0000000100
interpretivist approach0000000100
Meta-review1000000100
Prospective design1000000000
Qualitative review0000000100
Qualitative systematic review0000010000
Short-term prospective design0100000000
Total4611757463635856483916

Figure 6. Design frequency in topics.

Data collection and analysis: data collection included 30 methods, with the data collection method most often employed being questionnaires (57.84%). The experimental task (16.56%) was the second most preferred collection method, which included established or unique tasks designed by the researchers. Cognitive ability tests (6.84%) were also regularly used, along with various forms of interviewing (7.66%). Table 5 and Figure 7 represent data collection use in the various topics. Data analysis consisted of 3,857 occurrences of data analysis categorised into ±188 various data analysis techniques, shown in Table 6 and Figures 1–7. Descriptive statistics were the most commonly used (23.49%), along with correlational analysis (17.19%). When using a qualitative method, researchers generally employed thematic analysis (0.52%) or different forms of analysis that led to coding and the creation of themes. Review studies presented few data analysis methods, with most studies categorising their results. Mixed method and multi-method studies followed the analysis methods identified for the qualitative and quantitative studies included.

Data collection in the field of psychology.

Questionnaire3641136542405139243711
Experimental task68663529511551
Cognitive ability test957112615110
Physiological measure31216253010
Interview19301302201
Online scholarly literature104003401000
Open-ended questions15301312300
Semi-structured interviews10300321201
Observation10100000020
Documents5110000120
Focus group6120100000
Not stated2110001401
Public data6100000201
Drawing task0201110200
In-depth interview6000100000
Structured interview0200120010
Writing task1000400100
Questionnaire interviews1010201000
Non-experimental task4000000000
Tests2200000000
Group accounts2000000100
Open-ended prompts1100000100
Field notes2000000000
Open-ended interview2000000000
Qualitative questions0000010001
Social media1000000010
Assessment procedure0001000000
Closed-ended questions0000000100
Open discussions1000000000
Qualitative descriptions1000000000
Total55127375116797365605017

Figure 7. Data collection frequency in topics.

Data analysis in the field of psychology.

Not stated5120011501
Actor-Partner Interdependence Model (APIM)4000000000
Analysis of Covariance (ANCOVA)17813421001
Analysis of Variance (ANOVA)112601629151715653
Auto-regressive path coefficients0010000000
Average variance extracted (AVE)1000010000
Bartholomew's classification system1000000000
Bayesian analysis3000100000
Bibliometric analysis1100000100
Binary logistic regression1100141000
Binary multilevel regression0001000000
Binomial and Bernoulli regression models2000000000
Binomial mixed effects model1000000000
Bivariate Correlations321030435111
Bivariate logistic correlations1000010000
Bootstrapping391623516121
Canonical correlations0000000020
Cartesian diagram1000000000
Case-wise diagnostics0100001000
Casual network analysis0001000000
Categorisation5200110400
Categorisation of responses2000000000
Category codes3100010000
Cattell's scree-test0010000000
Chi-square tests52201756118743
Classic Parallel Analysis (PA)0010010010
Cluster analysis7000111101
Coded15312111210
Cohen d effect size14521323101
Common method variance (CMV)5010000000
Comprehensive Meta-Analysis (CMA)0000000010
Confidence Interval (CI)2000010000
Confirmatory Factor Analysis (CFA)5713400247131
Content analysis9100210100
Convergent validity1000000000
Cook's distance0100100000
Correlated-trait-correlated-method minus one model1000000000
Correlational analysis2598544182731348338
Covariance matrix3010000000
Covariance modelling0110000000
Covariance structure analyses2000000000
Cronbach's alpha61141865108375
Cross-validation0020000001
Cross-lagged analyses1210001000
Dependent t-test1200110100
Descriptive statistics3241324349414336282910
Differentiated analysis0000001000
Discriminate analysis1020000001
Discursive psychology1000000000
Dominance analysis1000000000
Expectation maximisation2100000100
Exploratory data Analysis1100110000
Exploratory Factor Analysis (EFA)145240114040
Exploratory structural equation modelling (ESEM)0010000010
Factor analysis124160215020
Measurement invariance testing0000000000
Four-way mixed ANOVA0101000000
Frequency rate20142122200
Friedman test1000000000
Games-Howell 2200010000
General linear model analysis1200001100
Greenhouse-Geisser correction2500001111
Grounded theory method0000000001
Grounded theory methodology using open and axial coding1000000000
Guttman split-half0010000000
Harman's one-factor test13200012000
Herman's criteria of experience categorisation0000000100
Hierarchical CFA (HCFA)0010000000
Hierarchical cluster analysis1000000000
Hierarchical Linear Modelling (HLM)762223767441
Huynh-Felt correction1000000000
Identified themes3000100000
Independent samples t-test38944483311
Inductive open coding1000000000
Inferential statistics2000001000
Interclass correlation3010000000
Internal consistency3120000000
Interpreted and defined0000100000
Interpretive Phenomenological Analysis (IPA)2100100000
Item fit analysis1050000000
K-means clustering0000000100
Kaiser-meyer-Olkin measure of sampling adequacy2080002020
Kendall's coefficients3100000000
Kolmogorov-Smirnov test1211220010
Lagged-effects multilevel modelling1100000000
Latent class differentiation (LCD)1000000000
Latent cluster analysis0000010000
Latent growth curve modelling (LGCM)1000000110
Latent means1000000000
Latent Profile Analysis (LPA)1100000000
Linear regressions691941031253130
Linguistic Inquiry and Word Count0000100000
Listwise deletion method0000010000
Log-likelihood ratios0000010000
Logistic mixed-effects model1000000000
Logistic regression analyses17010421001
Loglinear Model2000000000
Mahalanobis distances0200010000
Mann-Whitney U tests6421202400
Mauchly's test0102000101
Maximum likelihood method11390132310
Maximum-likelihood factor analysis with promax rotation0100000000
Measurement invariance testing4110100000
Mediation analysis29712435030
Meta-analysis3010000100
Microanalysis1000000000
Minimum significant difference (MSD) comparison0100000000
Mixed ANOVAs196010121410
Mixed linear model0001001000
Mixed-design ANCOVA1100000000
Mixed-effects multiple regression models1000000000
Moderated hierarchical regression model1000000000
Moderated regression analysis8400101010
Monte Carlo Markov Chains2010000000
Multi-group analysis3000000000
Multidimensional Random Coefficient Multinomial Logit (MRCML)0010000000
Multidimensional Scaling2000000000
Multiple-Group Confirmatory Factor Analysis (MGCFA)3000020000
Multilevel latent class analysis1000010000
Multilevel modelling7211100110
Multilevel Structural Equation Modelling (MSEM)2000000000
Multinominal logistic regression (MLR)1000000000
Multinominal regression analysis1000020000
Multiple Indicators Multiple Causes (MIMIC)0000110000
Multiple mediation analysis2600221000
Multiple regression341530345072
Multivariate analysis of co-variance (MANCOVA)12211011010
Multivariate Analysis of Variance (MANOVA)38845569112
Multivariate hierarchical linear regression1100000000
Multivariate linear regression0100001000
Multivariate logistic regression analyses1000000000
Multivariate regressions2100001000
Nagelkerke's R square0000010000
Narrative analysis1000001000
Negative binominal regression with log link0000010000
Newman-Keuls0100010000
Nomological Validity Analysis0010000000
One sample t-test81017464010
Ordinary Least-Square regression (OLS)2201000000
Pairwise deletion method0000010000
Pairwise parameter comparison4000002000
Parametric Analysis0001000000
Partial Least Squares regression method (PLS)1100000000
Path analysis21901245120
Path-analytic model test1000000000
Phenomenological analysis0010000100
Polynomial regression analyses1000000000
Fisher LSD0100000000
Principal axis factoring2140001000
Principal component analysis (PCA)81121103251
Pseudo-panel regression1000000000
Quantitative content analysis0000100000
Receiver operating characteristic (ROC) curve analysis2001000000
Relative weight analysis1000000000
Repeated measures analyses of variances (rANOVA)182217521111
Ryan-Einot-Gabriel-Welsch multiple F test1000000000
Satorra-Bentler scaled chi-square statistic0030000000
Scheffe's test3000010000
Sequential multiple mediation analysis1000000000
Shapiro-Wilk test2302100000
Sobel Test13501024000
Squared multiple correlations1000000000
Squared semi-partial correlations (sr2)2000000000
Stepwise regression analysis3200100020
Structural Equation Modelling (SEM)562233355053
Structure analysis0000001000
Subsequent t-test0000100000
Systematic coding- Gemeinschaft-oriented1000100000
Task analysis2000000000
Thematic analysis11200302200
Three (condition)-way ANOVA0400101000
Three-way hierarchical loglinear analysis0200000000
Tukey-Kramer corrections0001010000
Two-paired sample t-test7611031101
Two-tailed related t-test0110100000
Unadjusted Logistic regression analysis0100000000
Univariate generalized linear models (GLM)2000000000
Variance inflation factor (VIF)3100000010
Variance-covariance matrix1000000100
Wald test1100000000
Ward's hierarchical cluster method0000000001
Weighted least squares with corrections to means and variances (WLSMV)2000000000
Welch and Brown-Forsythe F-ratios0100010000
Wilcoxon signed-rank test3302000201
Wilks' Lamba6000001000
Word analysis0000000100
Word Association Analysis1000000000
scores5610110100
Total173863532919219823722511715255

Results of the topics researched in psychology can be seen in the tables, as previously stated in this article. It is noteworthy that, of the 10 topics, social psychology accounted for 43.54% of the studies, with cognitive psychology the second most popular research topic at 16.92%. Each of the remaining topics occurred in only 4.0–7.0% of the articles considered. A list of the 999 included articles is available under the section “View Articles” on the following website: https://methodgarden.xtrapolate.io/. This website was created by Scholtz et al. (2019) to visually present a research framework based on this article's results.

This systematised review categorised full-length articles from five international journals across a span of 5 years to provide insight into the use of research methods in the field of psychology. The results indicate what methods are used, how these methods are being used, and for what topics (why) in the included sample of articles. Given the limited sample, the results should be read as offering insight into method use rather than as a comprehensive representation of it. To our knowledge, this is the first study to address the topic in this manner. Our discussion attempts to chart a productive way forward from the key results on method use in psychology, especially within academia (Holloway, 2008).

With regard to the methods used, our data were consistent with the literature: only common research methods (Grant and Booth, 2009; Maree, 2016) were found, varying in the degree to which they were employed. Quantitative research was the most popular method, as the literature indicates (Breen and Darlaston-Jones, 2010; Counsell and Harlow, 2017) and as previous studies in specific areas of psychology have shown (see Coetzee and Van Zyl, 2014). Its long history as the first research method in psychology (Leech et al., 2007), together with researchers' current application of mathematical approaches in their studies (Toomela, 2010), may contribute to its popularity today. Whatever the case may be, our results show that, despite the growth in qualitative research (Demuth, 2015; Smith and McGannon, 2018), quantitative research remains the first choice for article publication in these journals, even though the included journals indicate openness to articles applying any research method. This finding may reflect qualitative research still being seen as a new method (Burman and Whelan, 2011) or reviewers holding qualitative studies to higher standards (Bluhm et al., 2011). Future research into possible publication bias with respect to research methods is encouraged; further investigation of the proclaimed growth of qualitative research with a different sample may also yield different results.

Review studies were found to outnumber multi-method and mixed methods studies. To this effect, Grant and Booth (2009) state that increased awareness, journal calls for contributions, and the efficiency of reviews in procuring research funds all promote their popularity. The low frequency of mixed methods studies contradicts the view in the literature that this is the third most utilised research method (Tashakkori and Teddlie, 2003). Its low occurrence in this sample could be due to opposing views on mixing methods (Gunasekare, 2015), to authors preferring to publish in dedicated mixed methods journals when using this method, or to its relative novelty (Ivankova et al., 2016). Despite its low occurrence, the application of the mixed methods design was methodologically clear in all cases, which was not the case for the remaining research methods.

Additionally, a substantial number of studies used a combination of methodologies without qualifying as mixed or multi-method studies. This result confirms that, as the literature suggests, perceived fixed boundaries between methods are often set aside in order to pursue the aim of a study, which could create a new and helpful way of understanding the world (Gunasekare, 2015). According to Toomela (2010), this is not unheard of and could be considered a form of “structural systemic science,” for example when a qualitative technique (observation) is applied within a quantitative study (experimental design). Based on this result, further research into this phenomenon, as well as its implications for research methods such as multi-method and mixed methods designs, is recommended.

Discerning how these research methods were applied presented some difficulty. In the case of sampling, most studies, regardless of method, mentioned some form of inclusion and exclusion criteria but no definite sampling method. This result, along with the fact that samples often consisted of students from the researchers' own academic institutions, speaks to ongoing debates among academics (Peterson and Merunka, 2014; Laher, 2016). Convenience samples and student participants in particular raise questions about the generalisability and applicability of results (Peterson and Merunka, 2014), since attention to sampling matters: inappropriate sampling can undermine the legitimacy of interpretations (Onwuegbuzie and Collins, 2017). Future investigation into the implications of this widespread use of convenience samples for the field of psychology, as well as the reasons for it, could provide valuable insight and is encouraged by this study.

Additionally, as indicated in Table 6, articles seldom reported the research designs used, which highlights a pressing lack of rigour in the included sample. Rigour with regard to the applied empirical method is imperative in promoting psychology as a science (American Psychological Association, 2020). Omitting parts of the research process from a publication, when reporting them could have informed other researchers' skills, should be questioned, and the effect on the ability to replicate results should be considered. Publications are often rejected due to a lack of rigour in the applied method and design (Fonseca, 2013; Laher, 2016), calling for increased clarity and knowledge of method application. Replication is a critical part of any field of scientific research and requires the “complete articulation” of the study methods used (Drotar, 2010, p. 804). The lack of thorough description could be explained by the requirements of certain journals to report only on certain aspects of the research process, especially with regard to the applied design (Laher, 2016). However, naming aspects such as sampling and design is a requirement of the APA's Journal Article Reporting Standards (JARS-Quant) (Appelbaum et al., 2018). With very little information on how a study was conducted, authors lose a valuable opportunity to enhance research validity, enrich the knowledge of others, and contribute to the growth of psychology and methodology as a whole. For this research study, it also restricted our results to the samples and designs that were reported, which indicated a preference for certain designs, such as cross-sectional designs for quantitative studies.

Data collection and analysis were for the most part clearly stated. A key result was the versatile use of questionnaires: researchers applied questionnaires in various ways, for example in questionnaire-based interviews, online surveys, and written questionnaires, across most research methods. This may highlight a trend for future research.

With regard to the topics for which these methods were employed, our research study identified a new field named “psychological practice.” This result may reflect researchers' growing consciousness of themselves as part of the research process (Denzin and Lincoln, 2003), of psychological practice, and of knowledge generation. The most popular topic was social psychology, which is generously covered in journals and by learned societies, a testament to the institutional support and richness social psychology enjoys within the field (Chryssochoou, 2015). The APA's perspective on 2018 trends in psychology likewise identifies an increased focus on how social determinants influence people's health (Deangelis, 2017).

This study was not without limitations, and the following should be taken into account. Firstly, the study used a sample of five specific journals to address its aim; despite the journals' general aims (as stated on their websites), this inclusion biased the review towards the research methods published in these journals only and limited generalisability. A broader sample of journals over a different period, or a single journal over a longer period, might provide different results. A second limitation is the use of Excel spreadsheets and an electronic system to log articles, which was a manual process and therefore left room for error (Bandara et al., 2015). To address this potential issue, co-coding was performed to reduce error. Lastly, this article categorised data based on the information presented in the article sample; there was no interpretation of what methodology could have been applied or of whether the stated methods adhered to the criteria for those methods. Thus, the large number of articles that did not clearly indicate a research method or design could influence the results of this review, although this was in itself a noteworthy result. Future research could review the research methods of a broader sample of journals with an interpretive review tool that increases rigour. The authors also encourage the future use of systematised review designs as a way to promote a concise procedure for applying this design.
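As an aside, the co-coding described above can also be quantified. The sketch below is illustrative only and uses hypothetical category labels rather than the study's actual coding log; it shows one common way to express agreement between two coders, Cohen's kappa, using scikit-learn.

```python
# Illustrative sketch: quantifying agreement between two co-coders with Cohen's kappa.
# The labels below are hypothetical; in practice they would come from the
# electronic log of coded articles described above.
from sklearn.metrics import cohen_kappa_score

coder_1 = ["quantitative", "qualitative", "quantitative", "review", "mixed methods"]
coder_2 = ["quantitative", "qualitative", "review", "review", "mixed methods"]

kappa = cohen_kappa_score(coder_1, coder_2)
print(f"Cohen's kappa between coders: {kappa:.2f}")  # 1.0 would indicate perfect agreement
```

Reporting an agreement statistic of this kind alongside the co-coding procedure would make the error-reduction step easier for readers to evaluate.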

Our research study presented the use of research methods in published articles in the field of psychology, together with recommendations for future research based on these results. It provides insight into the complex questions identified in the literature regarding what methods are used, how these methods are being used, and for what topics (why). The sample showed a preference for quantitative methods, relied on convenience sampling, and lacked rigorous accounts of the remaining methodologies. All methodologies that were clearly indicated in the sample were tabulated to give researchers insight into the general use of methods, not only the most frequently used ones. The lack of rigorous reporting of research methods was documented in depth for each step of the research process and may be of vital importance for addressing the current replication crisis in psychology. Recommendations for future research aim to motivate investigation of the practical implications of these results for psychology, for example publication bias and the use of convenience samples.

Ethics Statement

This study was cleared by the North-West University Health Research Ethics Committee: NWU-00115-17-S1.

Author Contributions

All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

  • Aanstoos C. M. (2014). Psychology . Available online at: http://eds.a.ebscohost.com.nwulib.nwu.ac.za/eds/detail/detail?sid=18de6c5c-2b03-4eac-94890145eb01bc70%40sessionmgr4006&vid=1&hid=4113&bdata=JnNpdGU9ZWRzL~WxpdmU%3d#AN=93871882&db=ers
  • American Psychological Association (2020). Science of Psychology . Available online at: https://www.apa.org/action/science/
  • Appelbaum M., Cooper H., Kline R. B., Mayo-Wilson E., Nezu A. M., Rao S. M. (2018). Journal article reporting standards for quantitative research in psychology: the APA Publications and Communications Board task force report . Am. Psychol. 73 :3. 10.1037/amp0000191 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Bandara W., Furtmueller E., Gorbacheva E., Miskon S., Beekhuyzen J. (2015). Achieving rigor in literature reviews: insights from qualitative data analysis and tool-support . Commun. Ass. Inform. Syst. 37 , 154–204. 10.17705/1CAIS.03708 [ CrossRef ] [ Google Scholar ]
  • Barr-Walker J. (2017). Evidence-based information needs of public health workers: a systematized review . J. Med. Libr. Assoc. 105 , 69–79. 10.5195/JMLA.2017.109 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Bittermann A., Fischer A. (2018). How to identify hot topics in psychology using topic modeling . Z. Psychol. 226 , 3–13. 10.1027/2151-2604/a000318 [ CrossRef ] [ Google Scholar ]
  • Bluhm D. J., Harman W., Lee T. W., Mitchell T. R. (2011). Qualitative research in management: a decade of progress . J. Manage. Stud. 48 , 1866–1891. 10.1111/j.1467-6486.2010.00972.x [ CrossRef ] [ Google Scholar ]
  • Breen L. J., Darlaston-Jones D. (2010). Moving beyond the enduring dominance of positivism in psychological research: implications for psychology in Australia . Aust. Psychol. 45 , 67–76. 10.1080/00050060903127481 [ CrossRef ] [ Google Scholar ]
  • Burman E., Whelan P. (2011). Problems in / of Qualitative Research . Maidenhead: Open University Press/McGraw Hill. [ Google Scholar ]
  • Chaichanasakul A., He Y., Chen H., Allen G. E. K., Khairallah T. S., Ramos K. (2011). Journal of Career Development: a 36-year content analysis (1972–2007) . J. Career. Dev. 38 , 440–455. 10.1177/0894845310380223 [ CrossRef ] [ Google Scholar ]
  • Chryssochoou X. (2015). Social Psychology . Inter. Encycl. Soc. Behav. Sci. 22 , 532–537. 10.1016/B978-0-08-097086-8.24095-6 [ CrossRef ] [ Google Scholar ]
  • Cichocka A., Jost J. T. (2014). Stripped of illusions? Exploring system justification processes in capitalist and post-Communist societies . Inter. J. Psychol. 49 , 6–29. 10.1002/ijop.12011 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Clay R. A. (2017). Psychology is More Popular Than Ever. Monitor on Psychology: Trends Report . Available online at: https://www.apa.org/monitor/2017/11/trends-popular
  • Coetzee M., Van Zyl L. E. (2014). A review of a decade's scholarly publications (2004–2013) in the South African Journal of Industrial Psychology . SA. J. Psychol . 40 , 1–16. 10.4102/sajip.v40i1.1227 [ CrossRef ] [ Google Scholar ]
  • Counsell A., Harlow L. (2017). Reporting practices and use of quantitative methods in Canadian journal articles in psychology . Can. Psychol. 58 , 140–147. 10.1037/cap0000074 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Deangelis T. (2017). Targeting Social Factors That Undermine Health. Monitor on Psychology: Trends Report . Available online at: https://www.apa.org/monitor/2017/11/trend-social-factors
  • Demuth C. (2015). New directions in qualitative research in psychology . Integr. Psychol. Behav. Sci. 49 , 125–133. 10.1007/s12124-015-9303-9 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Denzin N. K., Lincoln Y. (2003). The Landscape of Qualitative Research: Theories and Issues , 2nd Edn. London: Sage. [ Google Scholar ]
  • Drotar D. (2010). A call for replications of research in pediatric psychology and guidance for authors . J. Pediatr. Psychol. 35 , 801–805. 10.1093/jpepsy/jsq049 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Dweck C. S. (2017). Is psychology headed in the right direction? Yes, no, and maybe . Perspect. Psychol. Sci. 12 , 656–659. 10.1177/1745691616687747 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Earp B. D., Trafimow D. (2015). Replication, falsification, and the crisis of confidence in social psychology . Front. Psychol. 6 :621. 10.3389/fpsyg.2015.00621 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Ezeh A. C., Izugbara C. O., Kabiru C. W., Fonn S., Kahn K., Manderson L., et al.. (2010). Building capacity for public and population health research in Africa: the consortium for advanced research training in Africa (CARTA) model . Glob. Health Action 3 :5693. 10.3402/gha.v3i0.5693 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Ferreira A. L. L., Bessa M. M. M., Drezett J., De Abreu L. C. (2016). Quality of life of the woman carrier of endometriosis: systematized review . Reprod. Clim. 31 , 48–54. 10.1016/j.recli.2015.12.002 [ CrossRef ] [ Google Scholar ]
  • Fonseca M. (2013). Most Common Reasons for Journal Rejections . Available online at: http://www.editage.com/insights/most-common-reasons-for-journal-rejections
  • Gough B., Lyons A. (2016). The future of qualitative research in psychology: accentuating the positive . Integr. Psychol. Behav. Sci. 50 , 234–243. 10.1007/s12124-015-9320-8 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Grant M. J., Booth A. (2009). A typology of reviews: an analysis of 14 review types and associated methodologies . Health Info. Libr. J. 26 , 91–108. 10.1111/j.1471-1842.2009.00848.x [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Grix J. (2002). Introducing students to the generic terminology of social research . Politics 22 , 175–186. 10.1111/1467-9256.00173 [ CrossRef ] [ Google Scholar ]
  • Gunasekare U. L. T. P. (2015). Mixed research method as the third research paradigm: a literature review . Int. J. Sci. Res. 4 , 361–368. Available online at: https://ssrn.com/abstract=2735996 [ Google Scholar ]
  • Hengartner M. P. (2018). Raising awareness for the replication crisis in clinical psychology by focusing on inconsistencies in psychotherapy Research: how much can we rely on published findings from efficacy trials? Front. Psychol. 9 :256. 10.3389/fpsyg.2018.00256 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Holloway W. (2008). Doing intellectual disagreement differently . Psychoanal. Cult. Soc. 13 , 385–396. 10.1057/pcs.2008.29 [ CrossRef ] [ Google Scholar ]
  • Ivankova N. V., Creswell J. W., Plano Clark V. L. (2016). Foundations and Approaches to mixed methods research , in First Steps in Research , 2nd Edn. K. Maree (Pretoria: Van Schaick Publishers; ), 306–335. [ Google Scholar ]
  • Johnson M., Long T., White A. (2001). Arguments for British pluralism in qualitative health research . J. Adv. Nurs. 33 , 243–249. 10.1046/j.1365-2648.2001.01659.x [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Johnston A., Kelly S. E., Hsieh S. C., Skidmore B., Wells G. A. (2019). Systematic reviews of clinical practice guidelines: a methodological guide . J. Clin. Epidemiol. 108 , 64–72. 10.1016/j.jclinepi.2018.11.030 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Ketchen D. J., Jr., Boyd B. K., Bergh D. D. (2008). Research methodology in strategic management: past accomplishments and future challenges . Organ. Res. Methods 11 , 643–658. 10.1177/1094428108319843 [ CrossRef ] [ Google Scholar ]
  • Ktepi B. (2016). Data Analytics (DA) . Available online at: https://eds-b-ebscohost-com.nwulib.nwu.ac.za/eds/detail/detail?vid=2&sid=24c978f0-6685-4ed8-ad85-fa5bb04669b9%40sessionmgr101&bdata=JnNpdGU9ZWRzLWxpdmU%3d#AN=113931286&db=ers
  • Laher S. (2016). Ostinato rigore: establishing methodological rigour in quantitative research . S. Afr. J. Psychol. 46 , 316–327. 10.1177/0081246316649121 [ CrossRef ] [ Google Scholar ]
  • Lee C. (2015). The Myth of the Off-Limits Source . Available online at: http://blog.apastyle.org/apastyle/research/
  • Lee T. W., Mitchell T. R., Sablynski C. J. (1999). Qualitative research in organizational and vocational psychology, 1979–1999 . J. Vocat. Behav. 55 , 161–187. 10.1006/jvbe.1999.1707 [ CrossRef ] [ Google Scholar ]
  • Leech N. L., Onwuegbuzie A. J. (2007). A typology of mixed methods research designs . Qual. Quant. 43 , 265–275. 10.1007/s11135-007-9105-3 [ CrossRef ] [ Google Scholar ]
  • Levitt H. M., Motulsky S. L., Wertz F. J., Morrow S. L., Ponterotto J. G. (2017). Recommendations for designing and reviewing qualitative research in psychology: promoting methodological integrity . Qual. Psychol. 4 , 2–22. 10.1037/qup0000082 [ CrossRef ] [ Google Scholar ]
  • Lowe S. M., Moore S. (2014). Social networks and female reproductive choices in the developing world: a systematized review . Rep. Health 11 :85. 10.1186/1742-4755-11-85 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Maree K. (2016). Planning a research proposal , in First Steps in Research , 2nd Edn, ed Maree K. (Pretoria: Van Schaik Publishers; ), 49–70. [ Google Scholar ]
  • Maree K., Pietersen J. (2016). Sampling , in First Steps in Research, 2nd Edn , ed Maree K. (Pretoria: Van Schaik Publishers; ), 191–202. [ Google Scholar ]
  • Ngulube P. (2013). Blending qualitative and quantitative research methods in library and information science in sub-Saharan Africa . ESARBICA J. 32 , 10–23. Available online at: http://hdl.handle.net/10500/22397 . [ Google Scholar ]
  • Nieuwenhuis J. (2016). Qualitative research designs and data-gathering techniques , in First Steps in Research , 2nd Edn, ed Maree K. (Pretoria: Van Schaik Publishers; ), 71–102. [ Google Scholar ]
  • Nind M., Kilburn D., Wiles R. (2015). Using video and dialogue to generate pedagogic knowledge: teachers, learners and researchers reflecting together on the pedagogy of social research methods . Int. J. Soc. Res. Methodol. 18 , 561–576. 10.1080/13645579.2015.1062628 [ CrossRef ] [ Google Scholar ]
  • O'Cathain A. (2009). Editorial: mixed methods research in the health sciences—a quiet revolution . J. Mix. Methods 3 , 1–6. 10.1177/1558689808326272 [ CrossRef ] [ Google Scholar ]
  • O'Neil S., Koekemoer E. (2016). Two decades of qualitative research in psychology, industrial and organisational psychology and human resource management within South Africa: a critical review . SA J. Indust. Psychol. 42 , 1–16. 10.4102/sajip.v42i1.1350 [ CrossRef ] [ Google Scholar ]
  • Onwuegbuzie A. J., Collins K. M. (2017). The role of sampling in mixed methods research enhancing inference quality . Köln Z Soziol. 2 , 133–156. 10.1007/s11577-017-0455-0 [ CrossRef ] [ Google Scholar ]
  • Perestelo-Pérez L. (2013). Standards on how to develop and report systematic reviews in psychology and health . Int. J. Clin. Health Psychol. 13 , 49–57. 10.1016/S1697-2600(13)70007-3 [ CrossRef ] [ Google Scholar ]
  • Pericall L. M. T., Taylor E. (2014). Family function and its relationship to injury severity and psychiatric outcome in children with acquired brain injury: a systematized review . Dev. Med. Child Neurol. 56 , 19–30. 10.1111/dmcn.12237 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Peterson R. A., Merunka D. R. (2014). Convenience samples of college students and research reproducibility . J. Bus. Res. 67 , 1035–1041. 10.1016/j.jbusres.2013.08.010 [ CrossRef ] [ Google Scholar ]
  • Ritchie J., Lewis J., Elam G. (2009). Designing and selecting samples , in Qualitative Research Practice: A Guide for Social Science Students and Researchers , 2nd Edn, ed Ritchie J., Lewis J. (London: Sage; ), 1–23. [ Google Scholar ]
  • Sandelowski M. (2011). When a cigar is not just a cigar: alternative perspectives on data and data analysis . Res. Nurs. Health 34 , 342–352. 10.1002/nur.20437 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Sandelowski M., Voils C. I., Knafl G. (2009). On quantitizing . J. Mix. Methods Res. 3 , 208–222. 10.1177/1558689809334210 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Scholtz S. E., De Klerk W., De Beer L. T. (2019). A data generated research framework for conducting research methods in psychological research .
  • Scimago Journal & Country Rank (2017). Available online at: http://www.scimagojr.com/journalrank.php?category=3201&year=2015
  • Scopus (2017a). About Scopus . Available online at: https://www.scopus.com/home.uri (accessed February 01, 2017).
  • Scopus (2017b). Document Search . Available online at: https://www.scopus.com/home.uri (accessed February 01, 2017).
  • Scott Jones J., Goldring J. E. (2015). ‘I'm not a quants person'; key strategies in building competence and confidence in staff who teach quantitative research methods . Int. J. Soc. Res. Methodol. 18 , 479–494. 10.1080/13645579.2015.1062623 [ CrossRef ] [ Google Scholar ]
  • Smith B., McGannon K. R. (2018). Developing rigor in qualitative research: problems and opportunities within sport and exercise psychology . Int. Rev. Sport Exerc. Psychol. 11 , 101–121. 10.1080/1750984X.2017.1317357 [ CrossRef ] [ Google Scholar ]
  • Stangor C. (2011). Introduction to Psychology . Available online at: http://www.saylor.org/books/
  • Strydom H. (2011). Sampling in the quantitative paradigm , in Research at Grass Roots; For the Social Sciences and Human Service Professions , 4th Edn, eds de Vos A. S., Strydom H., Fouché C. B., Delport C. S. L. (Pretoria: Van Schaik Publishers; ), 221–234. [ Google Scholar ]
  • Tashakkori A., Teddlie C. (2003). Handbook of Mixed Methods in Social & Behavioural Research . Thousand Oaks, CA: SAGE publications. [ Google Scholar ]
  • Toomela A. (2010). Quantitative methods in psychology: inevitable and useless . Front. Psychol. 1 :29. 10.3389/fpsyg.2010.00029 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Truscott D. M., Swars S., Smith S., Thornton-Reid F., Zhao Y., Dooley C., et al.. (2010). A cross-disciplinary examination of the prevalence of mixed methods in educational research: 1995–2005 . Int. J. Soc. Res. Methodol. 13 , 317–328. 10.1080/13645570903097950 [ CrossRef ] [ Google Scholar ]
  • Weiten W. (2010). Psychology Themes and Variations . Belmont, CA: Wadsworth. [ Google Scholar ]

Research Methodology – Types, Examples and Writing Guide

Research Methodology

Definition:

Research Methodology refers to the systematic and scientific approach used to conduct research, investigate problems, and gather data and information for a specific purpose. It involves the techniques and procedures used to identify, collect, analyze, and interpret data in order to answer research questions or solve research problems. Research methodologies also serve as philosophical and theoretical frameworks that guide the research process.

Structure of Research Methodology

Research methodology formats can vary depending on the specific requirements of the research project, but the following is a basic example of a structure for a research methodology section:

I. Introduction

  • Provide an overview of the research problem and the need for a research methodology section
  • Outline the main research questions and objectives

II. Research Design

  • Explain the research design chosen and why it is appropriate for the research question(s) and objectives
  • Discuss any alternative research designs considered and why they were not chosen
  • Describe the research setting and participants (if applicable)

III. Data Collection Methods

  • Describe the methods used to collect data (e.g., surveys, interviews, observations)
  • Explain how the data collection methods were chosen and why they are appropriate for the research question(s) and objectives
  • Detail any procedures or instruments used for data collection

IV. Data Analysis Methods

  • Describe the methods used to analyze the data (e.g., statistical analysis, content analysis)
  • Explain how the data analysis methods were chosen and why they are appropriate for the research question(s) and objectives
  • Detail any procedures or software used for data analysis

V. Ethical Considerations

  • Discuss any ethical issues that may arise from the research and how they were addressed
  • Explain how informed consent was obtained (if applicable)
  • Detail any measures taken to ensure confidentiality and anonymity

VI. Limitations

  • Identify any potential limitations of the research methodology and how they may impact the results and conclusions

VII. Conclusion

  • Summarize the key aspects of the research methodology section
  • Explain how the research methodology addresses the research question(s) and objectives

Research Methodology Types

Types of Research Methodology are as follows:

Quantitative Research Methodology

This is a research methodology that involves the collection and analysis of numerical data using statistical methods. This type of research is often used to study cause-and-effect relationships and to make predictions.

Qualitative Research Methodology

This is a research methodology that involves the collection and analysis of non-numerical data such as words, images, and observations. This type of research is often used to explore complex phenomena, to gain an in-depth understanding of a particular topic, and to generate hypotheses.

Mixed-Methods Research Methodology

This is a research methodology that combines elements of both quantitative and qualitative research. This approach can be particularly useful for studies that aim to explore complex phenomena and to provide a more comprehensive understanding of a particular topic.

Case Study Research Methodology

This is a research methodology that involves in-depth examination of a single case or a small number of cases. Case studies are often used in psychology, sociology, and anthropology to gain a detailed understanding of a particular individual or group.

Action Research Methodology

This is a research methodology that involves a collaborative process between researchers and practitioners to identify and solve real-world problems. Action research is often used in education, healthcare, and social work.

Experimental Research Methodology

This is a research methodology that involves the manipulation of one or more independent variables to observe their effects on a dependent variable. Experimental research is often used to study cause-and-effect relationships and to make predictions.

Survey Research Methodology

This is a research methodology that involves the collection of data from a sample of individuals using questionnaires or interviews. Survey research is often used to study attitudes, opinions, and behaviors.

Grounded Theory Research Methodology

This is a research methodology that involves the development of theories based on the data collected during the research process. Grounded theory is often used in sociology and anthropology to generate theories about social phenomena.

Research Methodology Example

An Example of Research Methodology could be the following:

Research Methodology for Investigating the Effectiveness of Cognitive Behavioral Therapy in Reducing Symptoms of Depression in Adults

Introduction:

The aim of this research is to investigate the effectiveness of cognitive-behavioral therapy (CBT) in reducing symptoms of depression in adults. To achieve this objective, a randomized controlled trial (RCT) will be conducted using a mixed-methods approach.

Research Design:

The study will follow a pre-test and post-test design with two groups: an experimental group receiving CBT and a control group receiving no intervention. The study will also include a qualitative component, in which semi-structured interviews will be conducted with a subset of participants to explore their experiences of receiving CBT.

Participants:

Participants will be recruited from community mental health clinics in the local area. The sample will consist of 100 adults aged 18–65 years who meet the diagnostic criteria for major depressive disorder. Participants will be randomly assigned to either the experimental group or the control group.
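One way the random assignment described here could be implemented is sketched below. This is not part of the original protocol; the participant IDs and the fixed seed are illustrative assumptions.

```python
# Illustrative sketch of simple 1:1 random assignment of 100 participants.
# IDs and seed are hypothetical; a fixed seed makes the allocation reproducible.
import numpy as np

rng = np.random.default_rng(seed=2024)
participant_ids = [f"P{i:03d}" for i in range(1, 101)]

shuffled = rng.permutation(participant_ids)
experimental_group = list(shuffled[:50])  # to receive 12 weekly CBT sessions
control_group = list(shuffled[50:])       # no intervention during the study period

print(len(experimental_group), len(control_group))  # 50 50
```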

Intervention:

The experimental group will receive 12 weekly sessions of CBT, each lasting 60 minutes. The intervention will be delivered by licensed mental health professionals who have been trained in CBT. The control group will receive no intervention during the study period.

Data Collection:

Quantitative data will be collected through the use of standardized measures such as the Beck Depression Inventory-II (BDI-II) and the Generalized Anxiety Disorder-7 (GAD-7). Data will be collected at baseline, immediately after the intervention, and at a 3-month follow-up. Qualitative data will be collected through semi-structured interviews with a subset of participants from the experimental group. The interviews will be conducted at the end of the intervention period, and will explore participants’ experiences of receiving CBT.

Data Analysis:

Quantitative data will be analyzed using descriptive statistics, t-tests, and mixed-model analyses of variance (ANOVA) to assess the effectiveness of the intervention. Qualitative data will be analyzed using thematic analysis to identify common themes and patterns in participants’ experiences of receiving CBT.
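A minimal sketch of how this analysis plan might be carried out in Python follows. The file name and column names (participant, group, time, bdi) are assumptions, and the linear mixed model is used here as one way to approximate the planned mixed-model ANOVA rather than as the protocol's prescribed procedure.

```python
# Illustrative analysis sketch for the quantitative data, assuming a long-format
# table with hypothetical columns: participant, group (CBT/control),
# time (baseline/post/followup), and bdi (BDI-II score).
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("cbt_trial_scores.csv")  # hypothetical file name

# Descriptive statistics by group and time point
print(df.groupby(["group", "time"])["bdi"].describe())

# Independent-samples t-test on post-intervention BDI-II scores
post = df[df["time"] == "post"]
t_stat, p_value = stats.ttest_ind(post.loc[post["group"] == "CBT", "bdi"],
                                  post.loc[post["group"] == "control", "bdi"])
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Group x time linear mixed model with a random intercept per participant,
# one way to approximate the planned mixed-model ANOVA.
model = smf.mixedlm("bdi ~ group * time", data=df, groups=df["participant"]).fit()
print(model.summary())
```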

Ethical Considerations:

This study will comply with ethical guidelines for research involving human subjects. Participants will provide informed consent before participating in the study, and their privacy and confidentiality will be protected throughout the study. Any adverse events or reactions will be reported and managed appropriately.

Data Management:

All data collected will be kept confidential and stored securely using password-protected databases. Identifying information will be removed from qualitative data transcripts to ensure participants’ anonymity.

Limitations:

One potential limitation of this study is that it only focuses on one type of psychotherapy, CBT, and may not generalize to other types of therapy or interventions. Another limitation is that the study will only include participants from community mental health clinics, which may not be representative of the general population.

Conclusion:

This research aims to investigate the effectiveness of CBT in reducing symptoms of depression in adults. By using a randomized controlled trial and a mixed-methods approach, the study will provide valuable evidence on the effectiveness of CBT for depression and insight into participants’ experiences of receiving it. The results of this study will have important implications for the development of effective treatments for depression in clinical settings.

How to Write Research Methodology

Writing a research methodology involves explaining the methods and techniques you used to conduct research, collect data, and analyze results. It’s an essential section of any research paper or thesis, as it helps readers understand the validity and reliability of your findings. Here are the steps to write a research methodology:

  • Start by explaining your research question: Begin the methodology section by restating your research question and explaining why it’s important. This helps readers understand the purpose of your research and the rationale behind your methods.
  • Describe your research design: Explain the overall approach you used to conduct research. This could be a qualitative or quantitative research design, experimental or non-experimental, case study or survey, etc. Discuss the advantages and limitations of the chosen design.
  • Discuss your sample: Describe the participants or subjects you included in your study. Include details such as their demographics, sampling method, sample size, and any exclusion criteria used.
  • Describe your data collection methods: Explain how you collected data from your participants. This could include surveys, interviews, observations, questionnaires, or experiments. Include details on how you obtained informed consent, how you administered the tools, and how you minimized the risk of bias.
  • Explain your data analysis techniques: Describe the methods you used to analyze the data you collected. This could include statistical analysis, content analysis, thematic analysis, or discourse analysis. Explain how you dealt with missing data, outliers, and any other issues that arose during the analysis.
  • Discuss the validity and reliability of your research: Explain how you ensured the validity and reliability of your study. This could include measures such as triangulation, member checking, peer review, or inter-coder reliability.
  • Acknowledge any limitations of your research: Discuss any limitations of your study, including any potential threats to validity or generalizability. This helps readers understand the scope of your findings and how they might apply to other contexts.
  • Provide a summary: End the methodology section by summarizing the methods and techniques you used to conduct your research. This provides a clear overview of your research methodology and helps readers understand the process you followed to arrive at your findings.

When to Write Research Methodology

Research methodology is typically written after the research proposal has been approved and before the actual research is conducted. It should be written prior to data collection and analysis, as it provides a clear roadmap for the research project.

The research methodology is an important section of any research paper or thesis, as it describes the methods and procedures that will be used to conduct the research. It should include details about the research design, data collection methods, data analysis techniques, and any ethical considerations.

The methodology should be written in a clear and concise manner, and it should be based on established research practices and standards. It is important to provide enough detail so that the reader can understand how the research was conducted and evaluate the validity of the results.

Applications of Research Methodology

Here are some of the applications of research methodology:

  • To identify the research problem: Research methodology is used to identify the research problem, which is the first step in conducting any research.
  • To design the research: Research methodology helps in designing the research by selecting the appropriate research method, research design, and sampling technique.
  • To collect data: Research methodology provides a systematic approach to collect data from primary and secondary sources.
  • To analyze data: Research methodology helps in analyzing the collected data using various statistical and non-statistical techniques.
  • To test hypotheses: Research methodology provides a framework for testing hypotheses and drawing conclusions based on the analysis of data.
  • To generalize findings: Research methodology helps in generalizing the findings of the research to the target population.
  • To develop theories: Research methodology is used to develop new theories and modify existing theories based on the findings of the research.
  • To evaluate programs and policies: Research methodology is used to evaluate the effectiveness of programs and policies by collecting data and analyzing it.
  • To improve decision-making: Research methodology helps in making informed decisions by providing reliable and valid data.

Purpose of Research Methodology

Research methodology serves several important purposes, including:

  • To guide the research process: Research methodology provides a systematic framework for conducting research. It helps researchers to plan their research, define their research questions, and select appropriate methods and techniques for collecting and analyzing data.
  • To ensure research quality: Research methodology helps researchers to ensure that their research is rigorous, reliable, and valid. It provides guidelines for minimizing bias and error in data collection and analysis, and for ensuring that research findings are accurate and trustworthy.
  • To replicate research: Research methodology provides a clear and detailed account of the research process, making it possible for other researchers to replicate the study and verify its findings.
  • To advance knowledge: Research methodology enables researchers to generate new knowledge and to contribute to the body of knowledge in their field. It provides a means for testing hypotheses, exploring new ideas, and discovering new insights.
  • To inform decision-making: Research methodology provides evidence-based information that can inform policy and decision-making in a variety of fields, including medicine, public health, education, and business.

Advantages of Research Methodology

Research methodology has several advantages that make it a valuable tool for conducting research in various fields. Here are some of the key advantages of research methodology:

  • Systematic and structured approach: Research methodology provides a systematic and structured approach to conducting research, which ensures that the research is conducted in a rigorous and comprehensive manner.
  • Objectivity: Research methodology aims to ensure objectivity in the research process, which means that the research findings are based on evidence and not influenced by personal bias or subjective opinions.
  • Replicability: Research methodology ensures that research can be replicated by other researchers, which is essential for validating research findings and ensuring their accuracy.
  • Reliability: Research methodology aims to ensure that the research findings are reliable, which means that they are consistent and can be depended upon.
  • Validity: Research methodology ensures that the research findings are valid, which means that they accurately reflect the research question or hypothesis being tested.
  • Efficiency: Research methodology provides a structured and efficient way of conducting research, which helps to save time and resources.
  • Flexibility: Research methodology allows researchers to choose the most appropriate research methods and techniques based on the research question, data availability, and other relevant factors.
  • Scope for innovation: Research methodology provides scope for innovation and creativity in designing research studies and developing new research techniques.

Research Methodology Vs Research Methods

  • Research methodology refers to the philosophical and theoretical frameworks that guide the research process; research methods refer to the techniques and procedures used to collect and analyze data.
  • Methodology is concerned with the underlying principles and assumptions of research; methods are concerned with the practical aspects of research.
  • Methodology provides a rationale for why certain research methods are used; methods determine the specific steps that will be taken to conduct research.
  • Methodology is broader in scope and involves understanding the overall approach to research; methods are narrower in scope and focus on the specific techniques and tools used in research.
  • Methodology is concerned with identifying research questions, defining the research problem, and formulating hypotheses; methods are concerned with collecting data, analyzing data, and interpreting results.
  • Methodology is concerned with the validity and reliability of research; methods are concerned with the accuracy and precision of data.
  • Methodology is concerned with the ethical considerations of research; methods are concerned with the practical considerations of research.



  • Open access
  • Published: 07 September 2020

A tutorial on methodological studies: the what, when, how and why

  • Lawrence Mbuagbaw   ORCID: orcid.org/0000-0001-5855-5461 1 , 2 , 3 ,
  • Daeria O. Lawson 1 ,
  • Livia Puljak 4 ,
  • David B. Allison 5 &
  • Lehana Thabane 1 , 2 , 6 , 7 , 8  

BMC Medical Research Methodology volume  20 , Article number:  226 ( 2020 ) Cite this article

41k Accesses

57 Citations

59 Altmetric

Metrics details

Methodological studies – studies that evaluate the design, analysis or reporting of other research-related reports – play an important role in health research. They help to highlight issues in the conduct of research with the aim of improving health research methodology, and ultimately reducing research waste.

We provide an overview of some of the key aspects of methodological studies such as what they are, and when, how and why they are done. We adopt a “frequently asked questions” format to facilitate reading this paper and provide multiple examples to help guide researchers interested in conducting methodological studies. Some of the topics addressed include: is it necessary to publish a study protocol? How to select relevant research reports and databases for a methodological study? What approaches to data extraction and statistical analysis should be considered when conducting a methodological study? What are potential threats to validity and is there a way to appraise the quality of methodological studies?

Appropriate reflection and application of basic principles of epidemiology and biostatistics are required in the design and analysis of methodological studies. This paper provides an introduction for further discussion about the conduct of methodological studies.

Peer Review reports

The field of meta-research (or research-on-research) has proliferated in recent years in response to issues with research quality and conduct [ 1 , 2 , 3 ]. As the name suggests, this field targets issues with research design, conduct, analysis and reporting. Various types of research reports are often examined as the unit of analysis in these studies (e.g. abstracts, full manuscripts, trial registry entries). Like many other novel fields of research, meta-research has seen a proliferation of use before the development of reporting guidance. For example, this was the case with randomized trials for which risk of bias tools and reporting guidelines were only developed much later – after many trials had been published and noted to have limitations [ 4 , 5 ]; and for systematic reviews as well [ 6 , 7 , 8 ]. However, in the absence of formal guidance, studies that report on research differ substantially in how they are named, conducted and reported [ 9 , 10 ]. This creates challenges in identifying, summarizing and comparing them. In this tutorial paper, we will use the term methodological study to refer to any study that reports on the design, conduct, analysis or reporting of primary or secondary research-related reports (such as trial registry entries and conference abstracts).

In the past 10 years, there has been an increase in the use of terms related to methodological studies (based on records retrieved with a keyword search [in the title and abstract] for “methodological review” and “meta-epidemiological study” in PubMed up to December 2019), suggesting that these studies may be appearing more frequently in the literature. See Fig.  1 .

Fig. 1 Trends in the number of studies that mention “methodological review” or “meta-epidemiological study” in PubMed.
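A trend of this kind can be reproduced programmatically. The sketch below is a rough illustration using Biopython's interface to the NCBI E-utilities; the e-mail address is a placeholder, and the query is an approximation of the keyword search described above rather than the authors' exact strategy.

```python
# Rough sketch: counting PubMed records per year for the two keyword phrases.
# The query approximates the title/abstract search described in the text.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # placeholder; NCBI requires a contact address
query = ('"methodological review"[Title/Abstract] OR '
         '"meta-epidemiological study"[Title/Abstract]')

counts = {}
for year in range(2000, 2020):  # up to December 2019, as in the text
    handle = Entrez.esearch(db="pubmed", term=query, datetype="pdat",
                            mindate=str(year), maxdate=str(year))
    record = Entrez.read(handle)
    handle.close()
    counts[year] = int(record["Count"])

print(counts)
```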

The methods used in many methodological studies have been borrowed from systematic and scoping reviews. This practice has influenced the direction of the field, with many methodological studies including searches of electronic databases, screening of records, duplicate data extraction and assessments of risk of bias in the included studies. However, the research questions posed in methodological studies do not always require the approaches listed above, and guidance is needed on when and how to apply these methods to a methodological study. Even though methodological studies can be conducted on qualitative or mixed methods research, this paper focuses on and draws examples exclusively from quantitative research.

The objectives of this paper are to provide some insights on how to conduct methodological studies so that there is greater consistency between the research questions posed, and the design, analysis and reporting of findings. We provide multiple examples to illustrate concepts and a proposed framework for categorizing methodological studies in quantitative research.

What is a methodological study?

Any study that describes or analyzes methods (design, conduct, analysis or reporting) in published (or unpublished) literature is a methodological study. Consequently, the scope of methodological studies is quite extensive and includes, but is not limited to, topics as diverse as: research question formulation [ 11 ]; adherence to reporting guidelines [ 12 , 13 , 14 ] and consistency in reporting [ 15 ]; approaches to study analysis [ 16 ]; investigating the credibility of analyses [ 17 ]; and studies that synthesize these methodological studies [ 18 ]. While the nomenclature of methodological studies is not uniform, the intents and purposes of these studies remain fairly consistent – to describe or analyze methods in primary or secondary studies. As such, methodological studies may also be classified as a subtype of observational studies.

Parallel to this are experimental studies that compare different methods. Even though they play an important role in informing optimal research methods, experimental methodological studies are beyond the scope of this paper. Examples of such studies include the randomized trials by Buscemi et al., comparing single data extraction to double data extraction [ 19 ], and Carrasco-Labra et al., comparing approaches to presenting findings in Grading of Recommendations, Assessment, Development and Evaluations (GRADE) summary of findings tables [ 20 ]. In these studies, the unit of analysis is the person or groups of individuals applying the methods. We also direct readers to the Studies Within a Trial (SWAT) and Studies Within a Review (SWAR) programme operated through the Hub for Trials Methodology Research as a potentially useful resource for further reading on these types of experimental studies [ 21 ]. Lastly, this paper is not meant to inform the conduct of research using computational simulation and mathematical modeling, for which some guidance already exists [ 22 ], or studies on the development of methods using consensus-based approaches.

When should we conduct a methodological study?

Methodological studies occupy a unique niche in health research that allows them to inform methodological advances. Methodological studies should also be conducted as precursors to reporting guideline development, as they provide an opportunity to understand current practices and help to identify the need for guidance and gaps in methodological or reporting quality. For example, the development of the popular Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines was preceded by methodological studies identifying poor reporting practices [ 23 , 24 ]. In these instances, after the reporting guidelines are published, methodological studies can also be used to monitor uptake of the guidelines.

These studies can also be conducted to inform the state of the art for design, analysis and reporting practices across different types of health research fields, with the aim of improving research practices, and preventing or reducing research waste. For example, Samaan et al. conducted a scoping review of adherence to different reporting guidelines in health care literature [ 18 ]. Methodological studies can also be used to determine the factors associated with reporting practices. For example, Abbade et al. investigated journal characteristics associated with the use of the Participants, Intervention, Comparison, Outcome, Timeframe (PICOT) format in framing research questions in trials of venous ulcer disease [ 11 ].

How often are methodological studies conducted?

There is no clear answer to this question. Based on a search of PubMed, the use of related terms (“methodological review” and “meta-epidemiological study”) – and therefore, the number of methodological studies – is on the rise. However, many other terms are used to describe methodological studies. There are also many studies that explore design, conduct, analysis or reporting of research reports, but that do not use any specific terms to describe or label their study design in terms of “methodology”. This diversity in nomenclature makes a census of methodological studies elusive. Appropriate terminology and key words for methodological studies are needed to facilitate improved accessibility for end-users.

Why do we conduct methodological studies?

Methodological studies provide information on the design, conduct, analysis or reporting of primary and secondary research and can be used to appraise quality, quantity, completeness, accuracy and consistency of health research. These issues can be explored in specific fields, journals, databases, geographical regions and time periods. For example, Areia et al. explored the quality of reporting of endoscopic diagnostic studies in gastroenterology [ 25 ]; Knol et al. investigated the reporting of p -values in baseline tables in randomized trials published in high impact journals [ 26 ]; Chen et al. describe adherence to the Consolidated Standards of Reporting Trials (CONSORT) statement in Chinese journals [ 27 ]; and Hopewell et al. describe the effect of editors’ implementation of CONSORT guidelines on reporting of abstracts over time [ 28 ]. Methodological studies provide useful information to researchers, clinicians, editors, publishers and users of health literature. As a result, these studies have been a cornerstone of important methodological developments in the past two decades and have informed the development of many health research guidelines including the highly cited CONSORT statement [ 5 ].

Where can we find methodological studies?

Methodological studies can be found in most common biomedical bibliographic databases (e.g. Embase, MEDLINE, PubMed, Web of Science). However, the biggest caveat is that methodological studies are hard to identify in the literature due to the wide variety of names used and the lack of comprehensive databases dedicated to them. A handful can be found in the Cochrane Library as “Cochrane Methodology Reviews”, but these studies only cover methodological issues related to systematic reviews. Previous attempts to catalogue all empirical studies of methods used in reviews were abandoned 10 years ago [ 29 ]. In other databases, a variety of search terms may be applied with different levels of sensitivity and specificity.

Some frequently asked questions about methodological studies

In this section, we have outlined responses to questions that might help inform the conduct of methodological studies.

Q: How should I select research reports for my methodological study?

A: Selection of research reports for a methodological study depends on the research question and eligibility criteria. Once a clear research question is set and the nature of literature one desires to review is known, one can then begin the selection process. Selection may begin with a broad search, especially if the eligibility criteria are not apparent. For example, a methodological study of Cochrane Reviews of HIV would not require a complex search as all eligible studies can easily be retrieved from the Cochrane Library after checking a few boxes [ 30 ]. On the other hand, a methodological study of subgroup analyses in trials of gastrointestinal oncology would require a search to find such trials, and further screening to identify trials that conducted a subgroup analysis [ 31 ].

The strategies used for identifying participants in observational studies can apply here. One may use a systematic search to identify all eligible studies. If the number of eligible studies is unmanageable, a random sample of articles can be expected to provide comparable results if it is sufficiently large [ 32 ]. For example, Wilson et al. used a random sample of trials from the Cochrane Stroke Group’s Trial Register to investigate completeness of reporting [ 33 ]. It is possible that a simple random sample would lead to underrepresentation of units (i.e. research reports) that are smaller in number. This is relevant if the investigators wish to compare multiple groups but have too few units in one group. In this case a stratified sample would help to create equal groups. For example, in a methodological study comparing Cochrane and non-Cochrane reviews, Kahale et al. drew random samples from both groups [ 34 ]. Alternatively, systematic or purposeful sampling strategies can be used and we encourage researchers to justify their selected approaches based on the study objective.
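
As a concrete illustration of these sampling options, the following Python sketch draws a simple random sample and a stratified sample from a hypothetical sampling frame of retrieved records. The record fields ("pmid", "group"), the frame contents and the sample sizes are illustrative assumptions, not data from any of the cited studies.

```python
import random

# Hypothetical sampling frame: records retrieved from a database search.
sampling_frame = [
    {"pmid": 1001, "group": "Cochrane"},
    {"pmid": 1002, "group": "non-Cochrane"},
    {"pmid": 1003, "group": "Cochrane"},
    # ... the remaining eligible reports
]

random.seed(2020)  # fix the seed so the sample is reproducible

# Simple random sample of a manageable number of reports
n_sample = 2
simple_sample = random.sample(sampling_frame, k=n_sample)

# Stratified sample: draw the same number of reports from each group
# so that smaller groups are not underrepresented.
strata = {}
for record in sampling_frame:
    strata.setdefault(record["group"], []).append(record)

per_stratum = 1
stratified_sample = [
    record
    for group_records in strata.values()
    for record in random.sample(group_records, k=min(per_stratum, len(group_records)))
]

print(simple_sample)
print(stratified_sample)
```

Whichever approach is used, fixing the random seed and archiving the sampling frame make the selection process transparent and reproducible.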

Q: How many databases should I search?

A: The number of databases one should search would depend on the approach to sampling, which can include targeting the entire “population” of interest or a sample of that population. If you are interested in including the entire target population for your research question, or drawing a random or systematic sample from it, then a comprehensive and exhaustive search for relevant articles is required. In this case, we recommend using systematic approaches for searching electronic databases (i.e. at least 2 databases with a replicable and time-stamped search strategy). The results of your search will constitute a sampling frame from which eligible studies can be drawn.

Alternatively, if your approach to sampling is purposeful, then we recommend targeting the database(s) or data sources (e.g. journals, registries) that include the information you need. For example, if you are conducting a methodological study of high impact journals in plastic surgery and they are all indexed in PubMed, you likely do not need to search any other databases. You may also have a comprehensive list of all journals of interest and can approach your search using the journal names in your database search (or by accessing the journal archives directly from each journal’s website). Even though one could also search journals’ web pages directly, using a database such as PubMed has multiple advantages, such as the use of filters to narrow the search down to a certain period or to study types of interest. Furthermore, individual journals’ web sites may have different search functionalities, which do not necessarily yield a consistent output.
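
One way to make such a database search replicable and time-stamped is to script it. The sketch below uses Biopython's Entrez utilities to run a PubMed query and record the date the search was run together with the identifiers retrieved; the query string, journal names and date window are purely illustrative assumptions, not a recommended search strategy.

```python
from datetime import date
from Bio import Entrez  # Biopython's wrapper for the NCBI E-utilities

Entrez.email = "researcher@example.org"  # NCBI asks for a contact address

# Illustrative query: randomized trials in two hypothetical journals,
# restricted to a fixed publication window so the search is replicable.
query = (
    '("Plast Reconstr Surg"[Journal] OR "Ann Plast Surg"[Journal]) '
    "AND randomized controlled trial[Publication Type] "
    'AND ("2018/01/01"[dp] : "2019/12/31"[dp])'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=5000)
result = Entrez.read(handle)
handle.close()

# Record the search date and the identifiers retrieved; together these
# constitute the sampling frame for the methodological study.
print(f"Search run on {date.today().isoformat()}: {result['Count']} records")
pmids = result["IdList"]
```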

Q: Should I publish a protocol for my methodological study?

A: A protocol is a description of intended research methods. Currently, only protocols for clinical trials require registration [ 35 ]. Protocols for systematic reviews are encouraged but no formal recommendation exists. The scientific community welcomes the publication of protocols because they help protect against selective outcome reporting and the use of post hoc methodologies to embellish results, and help to avoid duplication of efforts [ 36 ]. While the latter two risks exist in methodological research, the negative consequences may be substantially less than for clinical outcomes. In a sample of 31 methodological studies, 7 (22.6%) referenced a published protocol [ 9 ]. In the Cochrane Library, there are 15 protocols for methodological reviews (21 July 2020). This suggests that publishing protocols for methodological studies is not uncommon.

Authors can consider publishing their study protocol in a scholarly journal as a manuscript. Advantages of such publication include obtaining peer-review feedback about the planned study and easy retrieval by searching databases such as PubMed. The disadvantages of trying to publish protocols include delays associated with manuscript handling and peer review, as well as costs, as few journals publish study protocols, and those journals mostly charge article-processing fees [ 37 ]. Authors who would like to make their protocol publicly available without publishing it in a scholarly journal could deposit their study protocols in publicly available repositories, such as the Open Science Framework ( https://osf.io/ ).

Q: How to appraise the quality of a methodological study?

A: To date, there is no published tool for appraising the risk of bias in a methodological study, but in principle, a methodological study could be considered a type of observational study. Therefore, during conduct or appraisal, care should be taken to avoid the biases common in observational studies [ 38 ]. These concerns include selection bias, the comparability of groups, and the ascertainment of exposure or outcome. In other words, to generate a representative sample, a comprehensive reproducible search may be necessary to build a sampling frame. Additionally, random sampling may be necessary to ensure that all the included research reports have the same probability of being selected, and the screening and selection processes should be transparent and reproducible. To ensure that the groups compared are similar in all characteristics, matching, random sampling or stratified sampling can be used. Statistical adjustments for between-group differences can also be applied at the analysis stage. Finally, duplicate data extraction can reduce errors in assessment of exposures or outcomes.

Q: Should I justify a sample size?

A: In all instances where one is not using the target population (i.e. the group to which inferences from the research report are directed) [ 39 ], a sample size justification is good practice. The sample size justification may take the form of a description of what is expected to be achieved with the number of articles selected, or a formal sample size estimation that outlines the number of articles required to answer the research question with a certain precision and power. Sample size justifications in methodological studies are reasonable in the following instances:

Comparing two groups

Determining a proportion, mean or another quantifier

Determining factors associated with an outcome using regression-based analyses

For example, El Dib et al. computed a sample size requirement for a methodological study of diagnostic strategies in randomized trials, based on a confidence interval approach [ 40 ].
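
As a minimal sketch of the confidence-interval approach for the "determining a proportion" scenario, the following Python function returns the number of research reports needed to estimate a proportion within a chosen margin of error. The anticipated proportion (40%) and margin (plus or minus 5 percentage points) are assumed values for illustration; they are not taken from El Dib et al.

```python
from math import ceil
from scipy.stats import norm

def n_for_proportion(p_expected: float, margin: float, confidence: float = 0.95) -> int:
    """Number of research reports needed to estimate a proportion
    (e.g. the proportion of trials reporting an item of interest)
    within +/- `margin` at the given confidence level."""
    z = norm.ppf(1 - (1 - confidence) / 2)
    return ceil(z**2 * p_expected * (1 - p_expected) / margin**2)

# Illustration: if roughly 40% of trials are expected to report the item of
# interest and the 95% CI should be no wider than +/- 5 percentage points:
print(n_for_proportion(0.40, 0.05))  # 369 reports
```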

Q: What should I call my study?

A: Other terms which have been used to describe/label methodological studies include “methodological review”, “methodological survey”, “meta-epidemiological study”, “systematic review”, “systematic survey”, “meta-research”, “research-on-research” and many others. We recommend that the study nomenclature be clear, unambiguous, informative and allow for appropriate indexing. Methodological study nomenclature that should be avoided includes “systematic review”, as this will likely be confused with a systematic review of a clinical question. “Systematic survey” may also lead to confusion about whether the survey was systematic (i.e. using a preplanned methodology) or a survey using “systematic” sampling (i.e. a sampling approach using specific intervals to determine who is selected) [ 32 ]. Any of the above meanings of the word “systematic” may be true for methodological studies and could be potentially misleading. “Meta-epidemiological study” is ideal for indexing, but not very informative as it describes an entire field. The term “review” may point towards an appraisal or “review” of the design, conduct, analysis or reporting (or methodological components) of the targeted research reports, yet it has also been used to describe narrative reviews [ 41 , 42 ]. The term “survey” is also in line with the approaches used in many methodological studies [ 9 ], and would be indicative of the sampling procedures of this study design. However, in the absence of guidelines on nomenclature, the term “methodological study” is broad enough to capture most of the scenarios of such studies.

Q: Should I account for clustering in my methodological study?

A: Data from methodological studies are often clustered. For example, articles coming from a specific source may have different reporting standards (e.g. the Cochrane Library). Articles within the same journal may be similar due to editorial practices and policies, reporting requirements and endorsement of guidelines. There is emerging evidence that these are real concerns that should be accounted for in analyses [ 43 ]. Some cluster variables are described in the section “What variables are relevant to methodological studies?”

A variety of modelling approaches can be used to account for correlated data, including the use of marginal, fixed or mixed effects regression models with appropriate computation of standard errors [ 44 ]. For example, Kosa et al. used generalized estimating equations to account for correlation of articles within journals [ 15 ]. Not accounting for clustering could lead to incorrect p-values, unduly narrow confidence intervals, and biased estimates [ 45 ].
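
A hedged sketch of one such approach is shown below: a population-averaged (GEE) logistic model with an exchangeable correlation structure, fitted with Python's statsmodels, in which articles are clustered within journals. The data set and variable names ("adequate", "funded", "year", "journal") are invented for illustration; this is not the model used by Kosa et al.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Illustrative data: one row per article, with a binary outcome
# (adequate reporting) and the journal as the clustering variable.
articles = pd.DataFrame({
    "adequate": [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1],
    "funded":   [1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1],
    "year":     [2015, 2015, 2016, 2016, 2017, 2017,
                 2015, 2016, 2016, 2017, 2017, 2018],
    "journal":  ["A", "A", "A", "B", "B", "B",
                 "C", "C", "C", "D", "D", "D"],
})

# Marginal (population-averaged) logistic model with standard errors that
# account for correlation of articles within the same journal.
model = smf.gee(
    "adequate ~ funded + year",
    groups="journal",
    data=articles,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())
```

A mixed-effects model with a random intercept per journal would be an alternative when cluster-specific, rather than population-averaged, estimates are of interest.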

Q: Should I extract data in duplicate?

A: Yes. Duplicate data extraction takes more time but results in fewer errors [ 19 ]. Data extraction errors in turn affect the effect estimate [ 46 ] and therefore should be mitigated. Duplicate data extraction should be considered in the absence of other approaches to minimize extraction errors. Much like systematic reviews, this area will likely see rapid new advances with machine learning and natural language processing technologies to support researchers with screening and data extraction [ 47 , 48 ]. However, experience plays an important role in the quality of extracted data, and inexperienced extractors should be paired with experienced extractors [ 46 , 49 ].
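
Reconciling two independent extractions can also be partly automated. The sketch below, assuming both reviewers' extraction sheets share the same structure, uses pandas to list the cells where the two extractions disagree so they can be resolved by discussion or by a third reviewer; the article identifiers and fields are hypothetical.

```python
import pandas as pd

# Hypothetical extraction sheets from two independent reviewers,
# indexed by the article identifier.
reviewer_1 = pd.DataFrame(
    {"sample_size": [120, 300, 58], "blinded": ["yes", "no", "yes"]},
    index=pd.Index([101, 102, 103], name="article_id"),
)
reviewer_2 = pd.DataFrame(
    {"sample_size": [120, 310, 58], "blinded": ["yes", "no", "no"]},
    index=pd.Index([101, 102, 103], name="article_id"),
)

# Cells where the two extractions disagree; these are the items that
# need to be resolved before analysis.
discrepancies = reviewer_1.compare(reviewer_2)
print(discrepancies)
```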

Q: Should I assess the risk of bias of research reports included in my methodological study?

A: Risk of bias is most useful in determining the certainty that can be placed in the effect measure from a study. In methodological studies, risk of bias may not serve the purpose of determining the trustworthiness of results, as effect measures are often not the primary goal of methodological studies. Determining risk of bias in methodological studies is likely a practice borrowed from systematic review methodology, but its intrinsic value is not obvious in methodological studies. When it is part of the research question, investigators often focus on one aspect of risk of bias. For example, Speich investigated how blinding was reported in surgical trials [ 50 ], and Abraha et al. investigated the application of intention-to-treat analyses in systematic reviews and trials [ 51 ].

Q: What variables are relevant to methodological studies?

A: There is empirical evidence that certain variables may inform the findings in a methodological study. We outline some of these and provide a brief overview below:

Country: Countries and regions differ in their research cultures, and the resources available to conduct research. Therefore, it is reasonable to believe that there may be differences in methodological features across countries. Methodological studies have reported loco-regional differences in reporting quality [ 52 , 53 ]. This may also be related to challenges non-English speakers face in publishing papers in English.

Authors’ expertise: The inclusion of authors with expertise in research methodology, biostatistics, and scientific writing is likely to influence the end-product. Oltean et al. found that among randomized trials in orthopaedic surgery, the use of analyses that accounted for clustering was more likely when specialists (e.g. statistician, epidemiologist or clinical trials methodologist) were included on the study team [ 54 ]. Fleming et al. found that including methodologists in the review team was associated with appropriate use of reporting guidelines [ 55 ].

Source of funding and conflicts of interest: Some studies have found that funded studies are reported better [ 56 , 57 ], while others have found no such difference [ 53 , 58 ]. The presence of funding would indicate the availability of resources deployed to ensure optimal design, conduct, analysis and reporting. However, the source of funding may introduce conflicts of interest and warrants assessment. For example, Kaiser et al. investigated the effect of industry funding on obesity or nutrition randomized trials and found that reporting quality was similar [ 59 ]. Thomas et al. looked at reporting quality of long-term weight loss trials and found that industry-funded studies were reported better [ 60 ]. Khan et al. examined the association between industry funding and “positive trials” (trials reporting a significant intervention effect) and found that industry funding was highly predictive of a positive trial [ 61 ]. This finding is similar to that of a recent Cochrane Methodology Review by Hansen et al. [ 62 ]

Journal characteristics: Certain journals’ characteristics may influence the study design, analysis or reporting. Characteristics such as journal endorsement of guidelines [ 63 , 64 ], and Journal Impact Factor (JIF) have been shown to be associated with reporting [ 63 , 65 , 66 , 67 ].

Study size (sample size/number of sites): Some studies have shown that reporting is better in larger studies [ 53 , 56 , 58 ].

Year of publication: It is reasonable to assume that design, conduct, analysis and reporting of research will change over time. Many studies have demonstrated improvements in reporting over time or after the publication of reporting guidelines [ 68 , 69 ].

Type of intervention: In a methodological study of reporting quality of weight loss intervention studies, Thabane et al. found that trials of pharmacologic interventions were reported better than trials of non-pharmacologic interventions [ 70 ].

Interactions between variables: Complex interactions between the previously listed variables are possible. High income countries with more resources may be more likely to conduct larger studies and incorporate a variety of experts. Authors in certain countries may prefer certain journals, and journal endorsement of guidelines and editorial policies may change over time.

Q: Should I focus only on high impact journals?

A: Investigators may choose to investigate only high impact journals because they are more likely to influence practice and policy, or because they assume that methodological standards would be higher. However, restricting the sample by JIF may severely limit the scope of articles included and may skew the sample towards articles with positive findings. The generalizability and applicability of findings from a handful of journals must be examined carefully, especially since the JIF varies over time. Even among journals that are all “high impact”, variations exist in methodological standards.

Q: Can I conduct a methodological study of qualitative research?

A: Yes. Even though a lot of methodological research has been conducted in the quantitative research field, methodological studies of qualitative studies are feasible. Certain databases that catalogue qualitative research, including the Cumulative Index to Nursing & Allied Health Literature (CINAHL), have defined subject headings that are specific to methodological research (e.g. “research methodology”). Alternatively, one could also conduct a qualitative methodological review; that is, use qualitative approaches to synthesize methodological issues in qualitative studies.

Q: What reporting guidelines should I use for my methodological study?

A: There is no guideline that covers the entire scope of methodological studies. One adaptation of the PRISMA guidelines has been published, which works well for studies that aim to use the entire target population of research reports [ 71 ]. However, it is not widely used (40 citations in 2 years as of 09 December 2019), and methodological studies that are designed as cross-sectional or before-after studies require a more fit-for-purpose guideline. A more encompassing reporting guideline for a broad range of methodological studies is currently under development [ 72 ]. However, in the absence of formal guidance, the requirements for scientific reporting should be respected, and authors of methodological studies should focus on transparency and reproducibility.

Q: What are the potential threats to validity and how can I avoid them?

A: Methodological studies may be compromised by a lack of internal or external validity. The main threats to internal validity in methodological studies are selection and confounding bias. Investigators must ensure that the methods used to select articles do not make them differ systematically from the set of articles to which they would like to make inferences. For example, attempting to make extrapolations to all journals after analyzing high-impact journals would be misleading.

Many factors (confounders) may distort the association between the exposure and outcome if the included research reports differ with respect to these factors [ 73 ]. For example, when examining the association between source of funding and completeness of reporting, it may be necessary to account for journals that endorse the guidelines. Confounding bias can be addressed by restriction, matching and statistical adjustment [ 73 ]. Restriction appears to be the method of choice for many investigators who choose to include only high impact journals or articles in a specific field. For example, Knol et al. examined the reporting of p-values in baseline tables of high impact journals [ 26 ]. Matching is also sometimes used. In a methodological study of non-randomized interventional studies of elective ventral hernia repair, Parker et al. matched prospective studies with retrospective studies and compared reporting standards [ 74 ]. Some other methodological studies use statistical adjustments. For example, Zhang et al. used regression techniques to determine the factors associated with missing participant data in trials [ 16 ].
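
As a minimal sketch of the statistical-adjustment option, the following Python snippet fits unadjusted and adjusted logistic models for the funding-and-reporting example using statsmodels. The data and the variable names ("complete", "industry_funded", "journal_endorses_consort") are invented for illustration; restriction and matching would instead be handled at the design stage rather than in code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative data: one row per trial report. "complete" is a binary
# indicator of complete reporting, "industry_funded" the exposure, and
# "journal_endorses_consort" a potential confounder.
reports = pd.DataFrame({
    "complete":                 [1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0],
    "industry_funded":          [1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0],
    "journal_endorses_consort": [1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0],
})

# Unadjusted and adjusted logistic models; comparing the funding coefficient
# across the two models shows how much of the crude association is
# explained by journal endorsement of the guideline.
crude = smf.logit("complete ~ industry_funded", data=reports).fit(disp=False)
adjusted = smf.logit(
    "complete ~ industry_funded + journal_endorses_consort", data=reports
).fit(disp=False)
print(crude.params, adjusted.params, sep="\n")
```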

With regard to external validity, researchers interested in conducting methodological studies must consider how generalizable or applicable their findings are. This should tie in closely with the research question and should be explicit. For example, findings from methodological studies of trials published in high impact cardiology journals cannot be assumed to apply to trials in other fields. Investigators must therefore ensure that their sample truly represents the target population, either by (a) conducting a comprehensive and exhaustive search, or (b) using an appropriate, justified, randomly selected sample of research reports.

Even applicability to high impact journals may vary based on the investigators’ definition, and over time. For example, for high impact journals in the field of general medicine, Bouwmeester et al. included the Annals of Internal Medicine (AIM), BMJ, the Journal of the American Medical Association (JAMA), Lancet, the New England Journal of Medicine (NEJM), and PLoS Medicine (n = 6) [ 75 ]. In contrast, the high impact journals selected in the methodological study by Schiller et al. were BMJ, JAMA, Lancet, and NEJM (n = 4) [ 76 ]. Another methodological study by Kosa et al. included AIM, BMJ, JAMA, Lancet and NEJM (n = 5). In the methodological study by Thabut et al., journals with a JIF greater than 5 were considered to be high impact. Riado Minguez et al. used first quartile journals in the Journal Citation Reports (JCR) for a specific year to determine “high impact” [ 77 ]. Ultimately, the definition of high impact will be based on the number of journals the investigators are willing to include, the year of impact and the JIF cut-off [ 78 ]. We acknowledge that the term “generalizability” may apply differently for methodological studies, especially when in many instances it is possible to include the entire target population in the sample studied.

Finally, methodological studies are not exempt from information bias which may stem from discrepancies in the included research reports [ 79 ], errors in data extraction, or inappropriate interpretation of the information extracted. Likewise, publication bias may also be a concern in methodological studies, but such concepts have not yet been explored.

A proposed framework

In order to inform discussions about methodological studies and the development of guidance for what should be reported, we have outlined some key features of methodological studies that can be used to classify them. For each of the categories outlined below, we provide an example. In our experience, the choice of approach to completing a methodological study can be informed by asking the following four questions:

What is the aim?

Methodological studies that investigate bias

A methodological study may be focused on exploring sources of bias in primary or secondary studies (meta-bias), or how bias is analyzed. We have taken care to distinguish bias (i.e. systematic deviations from the truth irrespective of the source) from reporting quality or completeness (i.e. not adhering to a specific reporting guideline or norm). An example of where this distinction would be important is in the case of a randomized trial with no blinding. This study (depending on the nature of the intervention) would be at risk of performance bias. However, if the authors report that their study was not blinded, they would have reported adequately. In fact, some methodological studies attempt to capture both “quality of conduct” and “quality of reporting”, such as Ritchie et al., who reported on the risk of bias in randomized trials of pharmacy practice interventions [ 80 ]. Babic et al. investigated how risk of bias was used to inform sensitivity analyses in Cochrane reviews [ 81 ]. Further, biases related to choice of outcomes can also be explored. For example, Tan et al. investigated differences in treatment effect size based on the outcome reported [ 82 ].

Methodological studies that investigate quality (or completeness) of reporting

Methodological studies may report quality of reporting against a reporting checklist (i.e. adherence to guidelines) or against expected norms. For example, Croitoru et al. report on the quality of reporting in systematic reviews published in dermatology journals based on their adherence to the PRISMA statement [ 83 ], and Khan et al. described the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals based on the CONSORT extension for harms [ 84 ]. Other methodological studies investigate reporting of certain features of interest that may not be part of formally published checklists or guidelines. For example, Mbuagbaw et al. described how often the implications for research are elaborated using the Evidence, Participants, Intervention, Comparison, Outcome, Timeframe (EPICOT) format [ 30 ].

Methodological studies that investigate the consistency of reporting

Sometimes investigators may be interested in how consistent reports of the same research are, as it is expected that there should be consistency between: conference abstracts and published manuscripts; manuscript abstracts and manuscript main text; and trial registration and published manuscript. For example, Rosmarakis et al. investigated consistency between conference abstracts and full text manuscripts [ 85 ].

Methodological studies that investigate factors associated with reporting

In addition to identifying issues with reporting in primary and secondary studies, authors of methodological studies may be interested in determining the factors that are associated with certain reporting practices. Many methodological studies incorporate this, albeit as a secondary outcome. For example, Farrokhyar et al. investigated the factors associated with reporting quality in randomized trials of coronary artery bypass grafting surgery [ 53 ].

Methodological studies that investigate methods

Methodological studies may also be used to describe or compare methods, and to explore the factors associated with the choice of methods. For example, Mueller et al. described the methods used for systematic reviews and meta-analyses of observational studies [ 86 ].

Methodological studies that summarize other methodological studies

Some methodological studies synthesize results from other methodological studies. For example, Li et al. conducted a scoping review of methodological reviews that investigated consistency between full text and abstracts in primary biomedical research [ 87 ].

Methodological studies that investigate nomenclature and terminology

Some methodological studies may investigate the use of names and terms in health research. For example, Martinic et al. investigated the definitions of systematic reviews used in overviews of systematic reviews (OSRs), meta-epidemiological studies and epidemiology textbooks [ 88 ].

Other types of methodological studies

In addition to the previously mentioned experimental methodological studies, there may exist other types of methodological studies not captured here.

What is the design?

Methodological studies that are descriptive

Most methodological studies are purely descriptive and report their findings as counts (percent) and means (standard deviation) or medians (interquartile range). For example, Mbuagbaw et al. described the reporting of research recommendations in Cochrane HIV systematic reviews [ 30 ]. Gohari et al. described the quality of reporting of randomized trials in diabetes in Iran [ 12 ].

Methodological studies that are analytical

Some methodological studies are analytical wherein “analytical studies identify and quantify associations, test hypotheses, identify causes and determine whether an association exists between variables, such as between an exposure and a disease.” [ 89 ] In the case of methodological studies all these investigations are possible. For example, Kosa et al. investigated the association between agreement in primary outcome from trial registry to published manuscript and study covariates. They found that larger and more recent studies were more likely to have agreement [ 15 ]. Tricco et al. compared the conclusion statements from Cochrane and non-Cochrane systematic reviews with a meta-analysis of the primary outcome and found that non-Cochrane reviews were more likely to report positive findings. These results are a test of the null hypothesis that the proportions of Cochrane and non-Cochrane reviews that report positive results are equal [ 90 ].
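
The kind of null-hypothesis test described here can be illustrated with a two-proportion z-test in Python; the counts below are invented for illustration and are not the figures reported by Tricco et al.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts: number of reviews with a positive conclusion,
# out of the total number of reviews in each group.
positive = [34, 61]   # Cochrane, non-Cochrane
totals   = [100, 100]

# Test of the null hypothesis that the two proportions are equal.
z_stat, p_value = proportions_ztest(count=positive, nobs=totals)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```

A chi-square test of the corresponding 2x2 table, or a logistic regression with review type as the exposure, would be equivalent ways of framing the same comparison.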

What is the sampling strategy?

Methodological studies that include the target population

Methodological reviews with narrow research questions may be able to include the entire target population. For example, in the methodological study of Cochrane HIV systematic reviews, Mbuagbaw et al. included all of the available studies (n = 103) [ 30 ].

Methodological studies that include a sample of the target population

Many methodological studies use random samples of the target population [ 33 , 91 , 92 ]. Alternatively, purposeful sampling may be used, limiting the sample to a subset of research-related reports published within a certain time period, or in journals with a certain ranking or on a topic. Systematic sampling can also be used when random sampling may be challenging to implement.

What is the unit of analysis?

Methodological studies with a research report as the unit of analysis

Many methodological studies use a research report (e.g. full manuscript of study, abstract portion of the study) as the unit of analysis, and inferences can be made at the study-level. However, both published and unpublished research-related reports can be studied. These may include articles, conference abstracts, registry entries etc.

Methodological studies with a design, analysis or reporting item as the unit of analysis

Some methodological studies report on items which may occur more than once per article. For example, Paquette et al. report on subgroup analyses in Cochrane reviews of atrial fibrillation in which 17 systematic reviews planned 56 subgroup analyses [ 93 ].

This framework is outlined in Fig. 2.

Fig. 2 A proposed framework for methodological studies

Conclusions

Methodological studies have examined different aspects of reporting such as quality, completeness, consistency and adherence to reporting guidelines. As such, many of the methodological study examples cited in this tutorial are related to reporting. However, as an evolving field, the scope of research questions that can be addressed by methodological studies is expected to increase.

In this paper we have outlined the scope and purpose of methodological studies, along with examples of instances in which various approaches have been used. In the absence of formal guidance on the design, conduct, analysis and reporting of methodological studies, we have provided some advice to help make methodological studies consistent. This advice is grounded in good contemporary scientific practice. Generally, the research question should tie in with the sampling approach and planned analysis. We have also highlighted the variables that may inform findings from methodological studies. Lastly, we have provided suggestions for ways in which authors can categorize their methodological studies to inform their design and analysis.

Availability of data and materials

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

Abbreviations

CONSORT: Consolidated Standards of Reporting Trials

EPICOT: Evidence, Participants, Intervention, Comparison, Outcome, Timeframe

GRADE: Grading of Recommendations, Assessment, Development and Evaluations

PICOT: Participants, Intervention, Comparison, Outcome, Timeframe

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

SWAR: Studies Within a Review

SWAT: Studies Within a Trial

References

Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. 2009;374(9683):86–9.

Chan AW, Song F, Vickers A, Jefferson T, Dickersin K, Gotzsche PC, Krumholz HM, Ghersi D, van der Worp HB. Increasing value and reducing waste: addressing inaccessible research. Lancet. 2014;383(9913):257–66.

Ioannidis JP, Greenland S, Hlatky MA, Khoury MJ, Macleod MR, Moher D, Schulz KF, Tibshirani R. Increasing value and reducing waste in research design, conduct, and analysis. Lancet. 2014;383(9912):166–75.

Higgins JP, Altman DG, Gotzsche PC, Juni P, Moher D, Oxman AD, Savovic J, Schulz KF, Weeks L, Sterne JA. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928.

Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet. 2001;357.

Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JP, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med. 2009;6(7):e1000100.

Shea BJ, Hamel C, Wells GA, Bouter LM, Kristjansson E, Grimshaw J, Henry DA, Boers M. AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. J Clin Epidemiol. 2009;62(10):1013–20.

Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, Moher D, Tugwell P, Welch V, Kristjansson E, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. Bmj. 2017;358:j4008.

Lawson DO, Leenus A, Mbuagbaw L. Mapping the nomenclature, methodology, and reporting of studies that review methods: a pilot methodological review. Pilot Feasibility Studies. 2020;6(1):13.

Puljak L, Makaric ZL, Buljan I, Pieper D. What is a meta-epidemiological study? Analysis of published literature indicated heterogeneous study designs and definitions. J Comp Eff Res. 2020.

Abbade LPF, Wang M, Sriganesh K, Jin Y, Mbuagbaw L, Thabane L. The framing of research questions using the PICOT format in randomized controlled trials of venous ulcer disease is suboptimal: a systematic survey. Wound Repair Regen. 2017;25(5):892–900.

Gohari F, Baradaran HR, Tabatabaee M, Anijidani S, Mohammadpour Touserkani F, Atlasi R, Razmgir M. Quality of reporting randomized controlled trials (RCTs) in diabetes in Iran; a systematic review. J Diabetes Metab Disord. 2015;15(1):36.

Wang M, Jin Y, Hu ZJ, Thabane A, Dennis B, Gajic-Veljanoski O, Paul J, Thabane L. The reporting quality of abstracts of stepped wedge randomized trials is suboptimal: a systematic survey of the literature. Contemp Clin Trials Commun. 2017;8:1–10.

Shanthanna H, Kaushal A, Mbuagbaw L, Couban R, Busse J, Thabane L. A cross-sectional study of the reporting quality of pilot or feasibility trials in high-impact anesthesia journals. Can J Anaesth. 2018;65(11):1180–95.

Kosa SD, Mbuagbaw L, Borg Debono V, Bhandari M, Dennis BB, Ene G, Leenus A, Shi D, Thabane M, Valvasori S, et al. Agreement in reporting between trial publications and current clinical trial registry in high impact journals: a methodological review. Contemporary Clinical Trials. 2018;65:144–50.

Zhang Y, Florez ID, Colunga Lozano LE, Aloweni FAB, Kennedy SA, Li A, Craigie S, Zhang S, Agarwal A, Lopes LC, et al. A systematic survey on reporting and methods for handling missing participant data for continuous outcomes in randomized controlled trials. J Clin Epidemiol. 2017;88:57–66.

Hernández AV, Boersma E, Murray GD, Habbema JD, Steyerberg EW. Subgroup analyses in therapeutic cardiovascular clinical trials: are most of them misleading? Am Heart J. 2006;151(2):257–64.

Samaan Z, Mbuagbaw L, Kosa D, Borg Debono V, Dillenburg R, Zhang S, Fruci V, Dennis B, Bawor M, Thabane L. A systematic scoping review of adherence to reporting guidelines in health care literature. J Multidiscip Healthc. 2013;6:169–88.

Buscemi N, Hartling L, Vandermeer B, Tjosvold L, Klassen TP. Single data extraction generated more errors than double data extraction in systematic reviews. J Clin Epidemiol. 2006;59(7):697–703.

Carrasco-Labra A, Brignardello-Petersen R, Santesso N, Neumann I, Mustafa RA, Mbuagbaw L, Etxeandia Ikobaltzeta I, De Stio C, McCullagh LJ, Alonso-Coello P. Improving GRADE evidence tables part 1: a randomized trial shows improved understanding of content in summary-of-findings tables with a new format. J Clin Epidemiol. 2016;74:7–18.

The Northern Ireland Hub for Trials Methodology Research: SWAT/SWAR Information [ https://www.qub.ac.uk/sites/TheNorthernIrelandNetworkforTrialsMethodologyResearch/SWATSWARInformation/ ]. Accessed 31 Aug 2020.

Chick S, Sánchez P, Ferrin D, Morrice D. How to conduct a successful simulation study. In: Proceedings of the 2003 winter simulation conference: 2003; 2003. p. 66–70.

Mulrow CD. The medical review article: state of the science. Ann Intern Med. 1987;106(3):485–8.

Sacks HS, Reitman D, Pagano D, Kupelnick B. Meta-analysis: an update. Mount Sinai J Med New York. 1996;63(3–4):216–24.

Areia M, Soares M, Dinis-Ribeiro M. Quality reporting of endoscopic diagnostic studies in gastrointestinal journals: where do we stand on the use of the STARD and CONSORT statements? Endoscopy. 2010;42(2):138–47.

Knol M, Groenwold R, Grobbee D. P-values in baseline tables of randomised controlled trials are inappropriate but still common in high impact journals. Eur J Prev Cardiol. 2012;19(2):231–2.

Chen M, Cui J, Zhang AL, Sze DM, Xue CC, May BH. Adherence to CONSORT items in randomized controlled trials of integrative medicine for colorectal Cancer published in Chinese journals. J Altern Complement Med. 2018;24(2):115–24.

Hopewell S, Ravaud P, Baron G, Boutron I. Effect of editors' implementation of CONSORT guidelines on the reporting of abstracts in high impact medical journals: interrupted time series analysis. BMJ. 2012;344:e4178.

The Cochrane Methodology Register Issue 2 2009 [ https://cmr.cochrane.org/help.htm ]. Accessed 31 Aug 2020.

Mbuagbaw L, Kredo T, Welch V, Mursleen S, Ross S, Zani B, Motaze NV, Quinlan L. Critical EPICOT items were absent in Cochrane human immunodeficiency virus systematic reviews: a bibliometric analysis. J Clin Epidemiol. 2016;74:66–72.

Barton S, Peckitt C, Sclafani F, Cunningham D, Chau I. The influence of industry sponsorship on the reporting of subgroup analyses within phase III randomised controlled trials in gastrointestinal oncology. Eur J Cancer. 2015;51(18):2732–9.

Setia MS. Methodology series module 5: sampling strategies. Indian J Dermatol. 2016;61(5):505–9.

Wilson B, Burnett P, Moher D, Altman DG, Al-Shahi Salman R. Completeness of reporting of randomised controlled trials including people with transient ischaemic attack or stroke: a systematic review. Eur Stroke J. 2018;3(4):337–46.

Kahale LA, Diab B, Brignardello-Petersen R, Agarwal A, Mustafa RA, Kwong J, Neumann I, Li L, Lopes LC, Briel M, et al. Systematic reviews do not adequately report or address missing outcome data in their analyses: a methodological survey. J Clin Epidemiol. 2018;99:14–23.

De Angelis CD, Drazen JM, Frizelle FA, Haug C, Hoey J, Horton R, Kotzin S, Laine C, Marusic A, Overbeke AJPM, et al. Is this clinical trial fully registered?: a statement from the International Committee of Medical Journal Editors*. Ann Intern Med. 2005;143(2):146–8.

Ohtake PJ, Childs JD. Why publish study protocols? Phys Ther. 2014;94(9):1208–9.

Rombey T, Allers K, Mathes T, Hoffmann F, Pieper D. A descriptive analysis of the characteristics and the peer review process of systematic review protocols published in an open peer review journal from 2012 to 2017. BMC Med Res Methodol. 2019;19(1):57.

Grimes DA, Schulz KF. Bias and causal associations in observational research. Lancet. 2002;359(9302):248–52.

Porta M (ed.): A dictionary of epidemiology, 5th edn. Oxford: Oxford University Press, Inc.; 2008.

El Dib R, Tikkinen KAO, Akl EA, Gomaa HA, Mustafa RA, Agarwal A, Carpenter CR, Zhang Y, Jorge EC, Almeida R, et al. Systematic survey of randomized trials evaluating the impact of alternative diagnostic strategies on patient-important outcomes. J Clin Epidemiol. 2017;84:61–9.

Helzer JE, Robins LN, Taibleson M, Woodruff RA Jr, Reich T, Wish ED. Reliability of psychiatric diagnosis. I. a methodological review. Arch Gen Psychiatry. 1977;34(2):129–33.

Chung ST, Chacko SK, Sunehag AL, Haymond MW. Measurements of gluconeogenesis and Glycogenolysis: a methodological review. Diabetes. 2015;64(12):3996–4010.

Sterne JA, Juni P, Schulz KF, Altman DG, Bartlett C, Egger M. Statistical methods for assessing the influence of study characteristics on treatment effects in 'meta-epidemiological' research. Stat Med. 2002;21(11):1513–24.

Moen EL, Fricano-Kugler CJ, Luikart BW, O’Malley AJ. Analyzing clustered data: why and how to account for multiple observations nested within a study participant? PLoS One. 2016;11(1):e0146721.

Zyzanski SJ, Flocke SA, Dickinson LM. On the nature and analysis of clustered data. Ann Fam Med. 2004;2(3):199–200.

Mathes T, Klassen P, Pieper D. Frequency of data extraction errors and methods to increase data extraction quality: a methodological review. BMC Med Res Methodol. 2017;17(1):152.

Bui DDA, Del Fiol G, Hurdle JF, Jonnalagadda S. Extractive text summarization system to aid data extraction from full text in systematic review development. J Biomed Inform. 2016;64:265–72.

Bui DD, Del Fiol G, Jonnalagadda S. PDF text classification to leverage information extraction from publication reports. J Biomed Inform. 2016;61:141–8.

Maticic K, Krnic Martinic M, Puljak L. Assessment of reporting quality of abstracts of systematic reviews with meta-analysis using PRISMA-A and discordance in assessments between raters without prior experience. BMC Med Res Methodol. 2019;19(1):32.

Speich B. Blinding in surgical randomized clinical trials in 2015. Ann Surg. 2017;266(1):21–2.

Abraha I, Cozzolino F, Orso M, Marchesi M, Germani A, Lombardo G, Eusebi P, De Florio R, Luchetta ML, Iorio A, et al. A systematic review found that deviations from intention-to-treat are common in randomized trials and systematic reviews. J Clin Epidemiol. 2017;84:37–46.

Zhong Y, Zhou W, Jiang H, Fan T, Diao X, Yang H, Min J, Wang G, Fu J, Mao B. Quality of reporting of two-group parallel randomized controlled clinical trials of multi-herb formulae: A survey of reports indexed in the Science Citation Index Expanded. Eur J Integrative Med. 2011;3(4):e309–16.

Farrokhyar F, Chu R, Whitlock R, Thabane L. A systematic review of the quality of publications reporting coronary artery bypass grafting trials. Can J Surg. 2007;50(4):266–77.

Oltean H, Gagnier JJ. Use of clustering analysis in randomized controlled trials in orthopaedic surgery. BMC Med Res Methodol. 2015;15:17.

Fleming PS, Koletsi D, Pandis N. Blinded by PRISMA: are systematic reviewers focusing on PRISMA and ignoring other guidelines? PLoS One. 2014;9(5):e96407.

Balasubramanian SP, Wiener M, Alshameeri Z, Tiruvoipati R, Elbourne D, Reed MW. Standards of reporting of randomized controlled trials in general surgery: can we do better? Ann Surg. 2006;244(5):663–7.

de Vries TW, van Roon EN. Low quality of reporting adverse drug reactions in paediatric randomised controlled trials. Arch Dis Child. 2010;95(12):1023–6.

Borg Debono V, Zhang S, Ye C, Paul J, Arya A, Hurlburt L, Murthy Y, Thabane L. The quality of reporting of RCTs used within a postoperative pain management meta-analysis, using the CONSORT statement. BMC Anesthesiol. 2012;12:13.

Kaiser KA, Cofield SS, Fontaine KR, Glasser SP, Thabane L, Chu R, Ambrale S, Dwary AD, Kumar A, Nayyar G, et al. Is funding source related to study reporting quality in obesity or nutrition randomized control trials in top-tier medical journals? Int J Obes. 2012;36(7):977–81.

Thomas O, Thabane L, Douketis J, Chu R, Westfall AO, Allison DB. Industry funding and the reporting quality of large long-term weight loss trials. Int J Obes. 2008;32(10):1531–6.

Khan NR, Saad H, Oravec CS, Rossi N, Nguyen V, Venable GT, Lillard JC, Patel P, Taylor DR, Vaughn BN, et al. A review of industry funding in randomized controlled trials published in the neurosurgical literature-the elephant in the room. Neurosurgery. 2018;83(5):890–7.

Hansen C, Lundh A, Rasmussen K, Hrobjartsson A. Financial conflicts of interest in systematic reviews: associations with results, conclusions, and methodological quality. Cochrane Database Syst Rev. 2019;8:Mr000047.

Kiehna EN, Starke RM, Pouratian N, Dumont AS. Standards for reporting randomized controlled trials in neurosurgery. J Neurosurg. 2011;114(2):280–5.

Liu LQ, Morris PJ, Pengel LH. Compliance to the CONSORT statement of randomized controlled trials in solid organ transplantation: a 3-year overview. Transpl Int. 2013;26(3):300–6.

Bala MM, Akl EA, Sun X, Bassler D, Mertz D, Mejza F, Vandvik PO, Malaga G, Johnston BC, Dahm P, et al. Randomized trials published in higher vs. lower impact journals differ in design, conduct, and analysis. J Clin Epidemiol. 2013;66(3):286–95.

Lee SY, Teoh PJ, Camm CF, Agha RA. Compliance of randomized controlled trials in trauma surgery with the CONSORT statement. J Trauma Acute Care Surg. 2013;75(4):562–72.

Ziogas DC, Zintzaras E. Analysis of the quality of reporting of randomized controlled trials in acute and chronic myeloid leukemia, and myelodysplastic syndromes as governed by the CONSORT statement. Ann Epidemiol. 2009;19(7):494–500.

Alvarez F, Meyer N, Gourraud PA, Paul C. CONSORT adoption and quality of reporting of randomized controlled trials: a systematic analysis in two dermatology journals. Br J Dermatol. 2009;161(5):1159–65.

Mbuagbaw L, Thabane M, Vanniyasingam T, Borg Debono V, Kosa S, Zhang S, Ye C, Parpia S, Dennis BB, Thabane L. Improvement in the quality of abstracts in major clinical journals since CONSORT extension for abstracts: a systematic review. Contemporary Clin trials. 2014;38(2):245–50.

Thabane L, Chu R, Cuddy K, Douketis J. What is the quality of reporting in weight loss intervention studies? A systematic review of randomized controlled trials. Int J Obes. 2007;31(10):1554–9.

Murad MH, Wang Z. Guidelines for reporting meta-epidemiological methodology research. Evidence Based Med. 2017;22(4):139.

METRIC - MEthodological sTudy ReportIng Checklist: guidelines for reporting methodological studies in health research [ http://www.equator-network.org/library/reporting-guidelines-under-development/reporting-guidelines-under-development-for-other-study-designs/#METRIC ]. Accessed 31 Aug 2020.

Jager KJ, Zoccali C, MacLeod A, Dekker FW. Confounding: what it is and how to deal with it. Kidney Int. 2008;73(3):256–60.

Parker SG, Halligan S, Erotocritou M, Wood CPJ, Boulton RW, Plumb AAO, Windsor ACJ, Mallett S. A systematic methodological review of non-randomised interventional studies of elective ventral hernia repair: clear definitions and a standardised minimum dataset are needed. Hernia. 2019.

Bouwmeester W, Zuithoff NPA, Mallett S, Geerlings MI, Vergouwe Y, Steyerberg EW, Altman DG, Moons KGM. Reporting and methods in clinical prediction research: a systematic review. PLoS Med. 2012;9(5):1–12.

Schiller P, Burchardi N, Niestroj M, Kieser M. Quality of reporting of clinical non-inferiority and equivalence randomised trials--update and extension. Trials. 2012;13:214.

Riado Minguez D, Kowalski M, Vallve Odena M, Longin Pontzen D, Jelicic Kadic A, Jeric M, Dosenovic S, Jakus D, Vrdoljak M, Poklepovic Pericic T, et al. Methodological and reporting quality of systematic reviews published in the highest ranking journals in the field of pain. Anesth Analg. 2017;125(4):1348–54.

Thabut G, Estellat C, Boutron I, Samama CM, Ravaud P. Methodological issues in trials assessing primary prophylaxis of venous thrombo-embolism. Eur Heart J. 2005;27(2):227–36.

Puljak L, Riva N, Parmelli E, González-Lorenzo M, Moja L, Pieper D. Data extraction methods: an analysis of internal reporting discrepancies in single manuscripts and practical advice. J Clin Epidemiol. 2020;117:158–64.

Ritchie A, Seubert L, Clifford R, Perry D, Bond C. Do randomised controlled trials relevant to pharmacy meet best practice standards for quality conduct and reporting? A systematic review. Int J Pharm Pract. 2019.

Babic A, Vuka I, Saric F, Proloscic I, Slapnicar E, Cavar J, Pericic TP, Pieper D, Puljak L. Overall bias methods and their use in sensitivity analysis of Cochrane reviews were not consistent. J Clin Epidemiol. 2019.

Tan A, Porcher R, Crequit P, Ravaud P, Dechartres A. Differences in treatment effect size between overall survival and progression-free survival in immunotherapy trials: a Meta-epidemiologic study of trials with results posted at ClinicalTrials.gov. J Clin Oncol. 2017;35(15):1686–94.

Croitoru D, Huang Y, Kurdina A, Chan AW, Drucker AM. Quality of reporting in systematic reviews published in dermatology journals. Br J Dermatol. 2020;182(6):1469–76.

Khan MS, Ochani RK, Shaikh A, Vaduganathan M, Khan SU, Fatima K, Yamani N, Mandrola J, Doukky R, Krasuski RA. Assessing the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals. Eur Heart J Qual Care Clin Outcomes. 2019.

Rosmarakis ES, Soteriades ES, Vergidis PI, Kasiakou SK, Falagas ME. From conference abstract to full paper: differences between data presented in conferences and journals. FASEB J. 2005;19(7):673–80.

Mueller M, D’Addario M, Egger M, Cevallos M, Dekkers O, Mugglin C, Scott P. Methods to systematically review and meta-analyse observational studies: a systematic scoping review of recommendations. BMC Med Res Methodol. 2018;18(1):44.

Li G, Abbade LPF, Nwosu I, Jin Y, Leenus A, Maaz M, Wang M, Bhatt M, Zielinski L, Sanger N, et al. A scoping review of comparisons between abstracts and full reports in primary biomedical research. BMC Med Res Methodol. 2017;17(1):181.

Krnic Martinic M, Pieper D, Glatt A, Puljak L. Definition of a systematic review used in overviews of systematic reviews, meta-epidemiological studies and textbooks. BMC Med Res Methodol. 2019;19(1):203.

Analytical study [ https://medical-dictionary.thefreedictionary.com/analytical+study ]. Accessed 31 Aug 2020.

Tricco AC, Tetzlaff J, Pham B, Brehaut J, Moher D. Non-Cochrane vs. Cochrane reviews were twice as likely to have positive conclusion statements: cross-sectional study. J Clin Epidemiol. 2009;62(4):380–6 e381.

Schalken N, Rietbergen C. The reporting quality of systematic reviews and Meta-analyses in industrial and organizational psychology: a systematic review. Front Psychol. 2017;8:1395.

Ranker LR, Petersen JM, Fox MP. Awareness of and potential for dependent error in the observational epidemiologic literature: A review. Ann Epidemiol. 2019;36:15–9 e12.

Paquette M, Alotaibi AM, Nieuwlaat R, Santesso N, Mbuagbaw L. A meta-epidemiological study of subgroup analyses in cochrane systematic reviews of atrial fibrillation. Syst Rev. 2019;8(1):241.

Acknowledgements

This work did not receive any dedicated funding.

Author information

Authors and affiliations.

Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada

Lawrence Mbuagbaw, Daeria O. Lawson & Lehana Thabane

Biostatistics Unit/FSORC, 50 Charlton Avenue East, St Joseph’s Healthcare—Hamilton, 3rd Floor Martha Wing, Room H321, Hamilton, Ontario, L8N 4A6, Canada

Lawrence Mbuagbaw & Lehana Thabane

Centre for the Development of Best Practices in Health, Yaoundé, Cameroon

Lawrence Mbuagbaw

Center for Evidence-Based Medicine and Health Care, Catholic University of Croatia, Ilica 242, 10000, Zagreb, Croatia

Livia Puljak

Department of Epidemiology and Biostatistics, School of Public Health – Bloomington, Indiana University, Bloomington, IN, 47405, USA

David B. Allison

Departments of Paediatrics and Anaesthesia, McMaster University, Hamilton, ON, Canada

Lehana Thabane

Centre for Evaluation of Medicine, St. Joseph’s Healthcare-Hamilton, Hamilton, ON, Canada

Population Health Research Institute, Hamilton Health Sciences, Hamilton, ON, Canada

Contributions

LM conceived the idea and drafted the outline and paper. DOL and LT commented on the idea and draft outline. LM, LP and DOL performed literature searches and data extraction. All authors (LM, DOL, LT, LP, DBA) reviewed several draft versions of the manuscript and approved the final manuscript.

Corresponding author

Correspondence to Lawrence Mbuagbaw.

Ethics declarations

Ethics approval and consent to participate.

Not applicable.

Consent for publication

Competing interests.

DOL, DBA, LM, LP and LT are involved in the development of a reporting guideline for methodological studies.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article.

Mbuagbaw, L., Lawson, D.O., Puljak, L. et al. A tutorial on methodological studies: the what, when, how and why. BMC Med Res Methodol 20, 226 (2020). https://doi.org/10.1186/s12874-020-01107-7

Received: 27 May 2020

Accepted: 27 August 2020

Published: 07 September 2020

DOI: https://doi.org/10.1186/s12874-020-01107-7


Keywords

  • Methodological study
  • Meta-epidemiology
  • Research methods
  • Research-on-research


Choosing the Right Research Methodology: A Guide for Researchers


Choosing an optimal research methodology is crucial for the success of any research project. The methodology you select will determine the type of data you collect, how you collect it, and how you analyse it. Understanding the different types of research methods available, along with their strengths and weaknesses, is thus imperative for making an informed decision.

Understanding different research methods:

There are several research methods available depending on the type of study you are conducting, i.e., whether it is laboratory-based, clinical, epidemiological, or survey-based. Some common methodologies include qualitative research, quantitative research, experimental research, survey-based research, and action research. Each method can be chosen and modified depending on the research hypotheses and objectives.

Qualitative vs quantitative research:

When deciding on a research methodology, one of the key factors to consider is whether your research will be qualitative or quantitative. Qualitative research is used to understand people’s experiences, concepts, thoughts, or behaviours. Quantitative research, by contrast, deals with numbers, graphs, and charts, and is used to test or confirm hypotheses, assumptions, and theories.

Qualitative research methodology:

Qualitative research is often used to examine issues that are not well understood, and to gather additional insights on these topics. Qualitative research methods include open-ended survey questions, observations of behaviours described through words, and reviews of literature that has explored similar theories and ideas. These methods are used to understand how language is used in real-world situations, identify common themes or overarching ideas, and describe and interpret various texts. Data analysis for qualitative research typically includes discourse analysis, thematic analysis, and textual analysis. 

Quantitative research methodology:

The goal of quantitative research is to test hypotheses, confirm assumptions and theories, and determine cause-and-effect relationships. Quantitative research methods include experiments, closed-ended survey questions, and countable, numbered observations. Data analysis for quantitative research relies heavily on statistical methods.
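By way of a minimal example, a hypothesis about a difference between two groups can be tested with an independent-samples t-test; the group values below are invented.

```python
# Minimal sketch: testing a hypothesis on numerical data with an
# independent-samples t-test (invented values, illustration only).
from scipy import stats

control = [72, 75, 71, 78, 74, 73]
treatment = [79, 82, 77, 84, 80, 81]

t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value is evidence against the null hypothesis of equal group means.
```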

Analysing qualitative vs quantitative data:

The methods used for data analysis also differ between qualitative and quantitative research. As mentioned earlier, quantitative data are generally analysed using statistical methods, which leave little room for speculation: the analysis is more structured and follows a predetermined plan, with the researcher starting from a hypothesis and using statistical methods to test it. By contrast, qualitative data analysis identifies patterns and themes within the data rather than producing statistical measures of it. It is an iterative process in which the researcher moves back and forth through the data, gauging its larger implications from different perspectives and revising the analysis as required.

When to use qualitative vs quantitative research:

The choice between qualitative and quantitative research depends on the gap the research project aims to address and the specific objectives of the study. If the goal is to establish facts about a subject or topic, quantitative research is an appropriate choice. However, if the goal is to understand people’s experiences or perspectives, qualitative research may be more suitable.

Conclusion:

In conclusion, an understanding of the different research methods available, their applicability, advantages, and disadvantages is essential for making an informed decision on the best methodology for your project. If you need any additional guidance on which research methodology to opt for, you can head over to Elsevier Author Services (EAS). EAS experts will guide you throughout the process and help you choose the perfect methodology for your research goals.


Exploring Research Methodology: Review Article

International Journal of Research & Review (IJRR)

2019, https://www.ijrrjournal.com/IJRR_Vol.6_Issue.3_March2019/Abstract_IJRR0011.html

Research methodology is a way to solve the research problem systematically. It may be understood as the science of studying how research is done scientifically: the various steps generally adopted by a researcher in studying a research problem, along with the logic behind them. It is necessary for the researcher to know not only the research methods and techniques but also the methodology. Researchers need to know not only how to develop certain indices or tests, how to calculate the mean, mode, median, standard deviation, or chi-square, and how to apply particular research techniques, but also which of these methods or techniques are relevant and which are not, what they mean and indicate, and why. Researchers also need to understand the assumptions underlying various techniques and the criteria by which they can decide that certain techniques and procedures are applicable to certain problems and not to others. All of this means that researchers must design a methodology for their particular problem, since it may differ from problem to problem.
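To make the quantities named in the abstract concrete, the short Python sketch below (using invented data) computes the mean, median, mode, and standard deviation, and runs a chi-square test of independence.

```python
# Minimal sketch of the basic quantities mentioned above, using Python's
# standard library and SciPy. All numbers are invented for illustration.
import statistics
from scipy import stats

scores = [4, 7, 7, 8, 10, 12, 12, 12, 15]

print("mean:", statistics.mean(scores))      # ~9.67
print("median:", statistics.median(scores))  # 10
print("mode:", statistics.mode(scores))      # 12
print("sd:", statistics.stdev(scores))       # sample standard deviation

# Chi-square test of independence on a 2x2 contingency table
# (e.g. exposure vs. outcome counts).
table = [[30, 10],
         [20, 40]]
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
```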

Related Papers

Scholarly Communication and the Publish or Perish Pressures of Academia (a volume in the Advances in Knowledge Acquisition, Transfer, and Management (AKATM) Book Series)

Dr. Naresh A. Babariya, Alka V. Gohel

Research methodology is central to any research study: the researcher must design a methodology for the chosen problem and solve that problem systematically. Formulating the research problem means deciding on a broad subject area in which the researcher has thorough knowledge; a second important responsibility is to compare findings, and here the literature review plays an extremely important role. The literature review is part of the research process and makes a valuable contribution to almost every operational step. A good research design provides information concerning the selection of the sample population and the treatments and controls to be imposed, and research work cannot be undertaken without sampling. The remaining steps are collecting the data, organizing it into a data structure, analyzing it with appropriate statistical methods, summarizing the analysis, and using the results to make judgments, decisions, and predictions. Keywords: Research Problem, Economical Plan, Developing Ideas, Research Strategy, Sampling Design, Theoretical Procedures, Experimental Studies, Numerical Schemes, Statistical Techniques.


Xochitl Ortiz

During several years of teaching, the authors observed that students fail to understand books on research methodology because such books are generally written in technical language. Since this course is not taught before the Master’s degree, students are not familiar with its vocabulary, methodology, and course contents. The authors have therefore attempted to write in non-technical language, so that students approaching research methodology through self-learning also find it easy to follow. Even students who intend to attain an advanced knowledge of research methodology in the social sciences will find the book helpful for understanding the basic concepts before reading more specialized texts. It is intended for students taking research methodology at the postgraduate and M.Phil. levels and is also useful for Ph.D. coursework examinations.

Anil Jharotia

Research is an important activity through which nations and societies generate the information needed for their development, and the robust collection of high-quality information supports that development. Research and development is an important tool for acquiring new knowledge in any field of human endeavour. Which research methodology is appropriate depends on the type of problem or question, on the researcher’s rationale, and on how the research is to be used to answer the research questions or problems at hand.

Scholars Bulletin

Wahied Khawar Balwan

Research is one of the means by which we seek to discover the truth. It rests on the tacit assumption that the world is a cosmos whose happenings have causes and are controlled by forces and relationships that can be expressed as laws and principles. Discovering these controls of nature gives us licence to search for ways of managing our environment. To search for truth scientifically, research methodology provides principles for refining our common beliefs through research activity that follows rules of logical and appropriate reasoning. To apply scientific research methodology properly, the researcher must have a clear grasp of both research methodology and methods, which helps ensure sound research results. This paper deals with the fundamentals of research methodology, such as the meaning of research, the objectives of research, motivation in research, and types of research. The basic approaches to research,...

IJRASET Publication

The term research means a systematic way to investigate new facts or to analyse existing information in order to update knowledge. Research methodology refers to the science of understanding how the solution to a research problem can be obtained systematically; it can also be described as the specific methods used to conduct the research. This paper presents a detailed overview of different research methods. Research methods and methodology differ from problem to problem, so to conduct research it is important for a researcher to have good knowledge not only of research methods but also of research methodology. Researchers need to develop a research design, which acts as a blueprint for conducting the research. This paper provides an analysis of different research methods and of how to choose a research method based on the application.

Index Terms: Methodology, Research Process, Pure Research, Qualitative methods, Quantitative methods.

I. Introduction. Research is very important for progress. The term research combines two parts: "re" (again) and "search" (find out); it is the art of finding solutions to problems. According to the Oxford Advanced American Dictionary, research is "a careful study of a subject, especially in order to discover new facts or information about it" [1]. P. M. Cook described it as "an honest, exhaustive, intelligent searching for facts and their meanings or implications with reference to a given problem. The product or findings of a given piece of research should be an authentic, verifiable contribution to knowledge in the field studied." [2] Methodology refers to the organized, theoretical investigation of the methods used in the research; it includes the analysis of the research methods along with the ideologies related to the area of investigation, and technically it covers the paradigm, research model, and research techniques [3]. Research methodology is the art of studying how research is done systematically. It aims to explain how to conduct a research project, which questions need to be answered, and which pitfalls to avoid while conducting research.

Khamis S Moh'd

Yuanita Damayanti

Research is any original and systematic investigation undertaken to increase knowledge and understanding and to establish facts and principles. It comprises the creation of ideas and the generation of new knowledge that lead to new and improved insights and to the development of new materials, devices, products, and processes. The word "research" perhaps originates from the old French word recerchier, which meant to 'search again'. It implicitly assumes that the earlier search was not exhaustive and complete and hence a repeated search is called for.

RESEARCH METHODOLOGY (Basis in the Management and Business Process)

Boyke Hatman

A research method is a scientific technique for obtaining data for specific purposes and uses; the activities involved are carried out according to scientific principles and follow a set of rules, activities, and procedures used by those conducting the work. Methodology is the theoretical analysis of such methods. Research is a systematic investigation to increase knowledge, and a systematic, organized effort to investigate specific problems that require answers. The nature of research can be understood by studying the various factors that motivate it. Each person’s motivation differs, shaped by their goals and profession, but in general research reflects the human desire to know something, and the desire to acquire and develop knowledge is a basic human need that motivates research. Valid data are obtained by using valid instruments, appropriate and adequate data sources, and correct data collection and analysis methods; reliable data require reliable instruments and repeated measurement; and objective data require a sample that approaches the population. Each study has specific goals and uses. In general, there are three types of research objectives: discovery, verification, and development. Discovery means that the data obtained from the research are genuinely new and not previously known.





Research on Quantitative Analysis Methods for the Spatial Characteristics of Traditional Villages Based on Three-Dimensional Point Cloud Data: A Case Study of Liukeng Village, Jiangxi, China


Share and Cite

Li, Z.; Wang, T.; Sun, S. Research on Quantitative Analysis Methods for the Spatial Characteristics of Traditional Villages Based on Three-Dimensional Point Cloud Data: A Case Study of Liukeng Village, Jiangxi, China. Land 2024 , 13 , 1261. https://doi.org/10.3390/land13081261

Li Z, Wang T, Sun S. Research on Quantitative Analysis Methods for the Spatial Characteristics of Traditional Villages Based on Three-Dimensional Point Cloud Data: A Case Study of Liukeng Village, Jiangxi, China. Land . 2024; 13(8):1261. https://doi.org/10.3390/land13081261

Li, Zhe, Tianlian Wang, and Su Sun. 2024. "Research on Quantitative Analysis Methods for the Spatial Characteristics of Traditional Villages Based on Three-Dimensional Point Cloud Data: A Case Study of Liukeng Village, Jiangxi, China" Land 13, no. 8: 1261. https://doi.org/10.3390/land13081261



Types of Scholarly Articles


Research Articles


Sometimes a professor might ask you to find original research or may ask you to not use literature reviews/systematic reviews as sources, but what do those terms mean? How can we tell if our potential source meets our professor's criteria?

In a research article, an original study is conducted by the authors. They collect and analyze data, sharing their methods and results, and then draw conclusions from their analysis. The kind of study performed can vary (surveys, interviews, experiments, etc.), but in all cases, data is analyzed and a new argument is put forth. Research articles are considered primary sources.

  • Note: research articles will often contain a section titled "literature review"; this section surveys existing research as a foundation for the article's new idea. Simply seeing the words "literature review" does not automatically mean an article is a review article; it is important to look more closely.

Below is a screenshot of the abstract of the article Effectiveness of Health Coaching in Diabetes Control and Lifestyle Improvement: A Randomized-Controlled Trial , with some words underlined that let us know that a study was conducted and that this is a research article.

A screenshot of an abstract. The words "study," "controlled trial," "114 diabetic patients," "6-month period," "intervention group" are underlined

Review Articles

A review article gathers multiple research articles on a certain topic, summarizing and analyzing the arguments made in those articles. A review article might highlight patterns or gaps in the research, might show support for existing theories, or suggest new directions for research, but does not conduct original research on a subject. Review articles can be a great place to get an overview of the existing research on a subject. A review article is a secondary source.

  • Looking in the reference section of a literature or systematic review can be a good place to find original research studies.

Below is a screenshot of the abstract of the article The Effect of Dietary Glycaemic Index on Glycaemia in Patients with Type 2 Diabetes: A Systematic Review and Meta-Analysis of Randomized Controlled Trials , with words underlined that clue us in that this is a review article.

A screenshot of an abstract. The words "systematic review," "meta-analysis," "selected from a number of databases" are underlined

Tips for identifying article type

Start by looking at the abstract to determine whether a source might be a research article or a review article. If you're not sure after looking at the abstract, find the methods section for the source: what methods did the authors use? If they mention searching databases, it's most likely a review; if they mention conducting an experiment, survey, interview, etc., it's most likely a research article. If you're still unsure, feel free to reach out to a librarian and ask!
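The same tip can be expressed as a rough keyword heuristic. The sketch below is purely illustrative (the cue lists are made up for this example) and is no substitute for actually reading the methods section.

```python
# Illustrative heuristic only: look for review/meta-analysis language versus
# language describing a conducted study. Cue lists are invented examples.
REVIEW_CUES = ["systematic review", "meta-analysis", "databases were searched",
               "literature search"]
RESEARCH_CUES = ["randomized", "controlled trial", "we conducted",
                 "participants were recruited", "we surveyed",
                 "interviews were conducted"]

def guess_article_type(abstract: str) -> str:
    text = abstract.lower()
    review_hits = sum(cue in text for cue in REVIEW_CUES)
    research_hits = sum(cue in text for cue in RESEARCH_CUES)
    if review_hits > research_hits:
        return "likely a review article"
    if research_hits > review_hits:
        return "likely an original research article"
    return "unclear - read the methods section"

print(guess_article_type(
    "A systematic review and meta-analysis; several databases were searched."))
```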

Let's Practice

Below are two different scholarly articles. Look at the abstract and the methods section: which one is an original research study? Which one is a literature review?

  • Article 1- Research or Review?
  • Article 2- Research or Review?


Research Methods: Issues and Research Direction

  • September 2020
  • Business and Management Research 9(3):46-55

Linus Chukwunenye Osuagwu, Veritas University Abuja


9 Best Marketing Research Methods to Know Your Buyer Better [+ Examples]

Ramona Sukhraj

Published: August 08, 2024

One of the most underrated skills you can have as a marketer is marketing research — which is great news for this unapologetic cyber sleuth.


From brand design and product development to buyer personas and competitive analysis, I’ve researched a number of initiatives in my decade-long marketing career.

And let me tell you: having the right marketing research methods in your toolbox is a must.

Market research is the secret to crafting a strategy that will truly help you accomplish your goals. The good news is there is no shortage of options.

How to Choose a Marketing Research Method

Thanks to the Internet, we have more marketing research (or market research) methods at our fingertips than ever, but they’re not all created equal. Let’s quickly go over how to choose the right one.


1. Identify your objective.

What are you researching? Do you need to understand your audience better? How about your competition? Or maybe you want to know more about your customer’s feelings about a specific product.

Before starting your research, take some time to identify precisely what you’re looking for. This could be a goal you want to reach, a problem you need to solve, or a question you need to answer.

For example, an objective may be as foundational as understanding your ideal customer better to create new buyer personas for your marketing agency (pause for flashbacks to my former life).

Or, if you’re an organic soda company, it could be trying to learn what flavors people are craving.

2. Determine what type of data and research you need.

Next, determine what data type will best answer the problems or questions you identified. There are primarily two types: qualitative and quantitative. (Sound familiar?)

  • Qualitative Data is non-numerical information, like subjective characteristics, opinions, and feelings. It’s pretty open to interpretation and descriptive, but it’s also harder to measure. This type of data can be collected through interviews, observations, and open-ended questions.
  • Quantitative Data , on the other hand, is numerical information, such as quantities, sizes, amounts, or percentages. It’s measurable and usually pretty hard to argue with, coming from a reputable source. It can be derived through surveys, experiments, or statistical analysis.

Understanding the differences between qualitative and quantitative data will help you pinpoint which research methods will yield the desired results.

For instance, thinking of our earlier examples, qualitative data would usually be best suited for buyer personas, while quantitative data is more useful for the soda flavors.

However, truth be told, the two really work together.

Qualitative conclusions are usually drawn from quantitative, numerical data. So, you’ll likely need both to get the complete picture of your subject.

For example, if your quantitative data says 70% of people are Team Black and only 30% are Team Green — Shout out to my fellow House of the Dragon fans — your qualitative data will say people support Black more than Green.

(As they should.)

Primary Research vs Secondary Research

You’ll also want to understand the difference between primary and secondary research.

Primary research involves collecting new, original data directly from the source (say, your target market). In other words, it’s information gathered first-hand that wasn’t found elsewhere.

Some examples include conducting experiments, surveys, interviews, observations, or focus groups.

Meanwhile, secondary research is the analysis and interpretation of existing data collected from others. Think of this like what we used to do for school projects: We would read a book, scour the internet, or pull insights from others to work from.

So, which is better?

Personally, I say any research is good research, but if you have the time and resources, primary research is hard to top. With it, you don’t have to worry about your source's credibility or how relevant it is to your specific objective.

You are in full control and best equipped to get the reliable information you need.

3. Put it all together.

Once you know your objective and what kind of data you want, you’re ready to select your marketing research method.

For instance, let’s say you’re a restaurant trying to see how attendees felt about the Speed Dating event you hosted last week.

You shouldn’t run a field experiment or download a third-party report on speed dating events; those would be useless to you. You need to conduct a survey that allows you to ask pointed questions about the event.

This would yield both qualitative and quantitative data you can use to improve and bring together more love birds next time around.

Best Market Research Methods for 2024

Now that you know what you’re looking for in a marketing research method, let’s dive into the best options.

Note: According to HubSpot’s 2024 State of Marketing report, understanding customers and their needs is one of the biggest challenges facing marketers today. The options we discuss are great consumer research methodologies , but they can also be used for other areas.

Primary Research

1. Interviews

Interviews are a form of primary research where you ask people specific questions about a topic or theme. They typically deliver qualitative information.

I’ve conducted many interviews for marketing purposes, but I’ve also done many for journalistic purposes, like this profile on comedian Zarna Garg . There’s no better way to gather candid, open-ended insights in my book, but that doesn’t mean they’re a cure-all.

What I like: Real-time conversations allow you to ask different questions if you’re not getting the information you need. They also push interviewees to respond quickly, which can result in more authentic answers.

What I dislike: They can be time-consuming and harder to measure (read: get quantitative data) unless you ask pointed yes or no questions.

Best for: Creating buyer personas or getting feedback on customer experience, a product, or content.

2. Focus Groups

Focus groups are similar to conducting interviews but on a larger scale.

In marketing and business, this typically means getting a small group together in a room (or Zoom), asking them questions about various topics you are researching. You record and/or observe their responses to then take action.

They are ideal for collecting long-form, open-ended feedback, and subjective opinions.

One well-known focus group you may remember was run by Domino’s Pizza in 2009 .

After poor ratings and dropping over $100 million in revenue, the brand conducted focus groups with real customers to learn where they could have done better.

It was met with comments like “worst excuse for pizza I’ve ever had” and “the crust tastes like cardboard.” But rather than running from the tough love, it took the hit and completely overhauled its recipes.

The team admitted their missteps and returned to the market with better food and a campaign detailing their “Pizza Turn Around.”

The result? The brand won a ton of praise for its willingness to take feedback, efforts to do right by its consumers, and clever campaign. But, most importantly, revenue for Domino’s rose by 14.3% over the previous year.

The brand continues to conduct focus groups and share real footage from them in its promotions.

What I like: Similar to interviewing, you can dig deeper and pivot as needed due to the real-time nature. They’re personal and detailed.

What I dislike: Once again, they can be time-consuming and make it difficult to get quantitative data. There is also a chance some participants may overshadow others.

Best for: Product research or development

Pro tip: Need help planning your focus group? Our free Market Research Kit includes a handy template to start organizing your thoughts in addition to a SWOT Analysis Template, Survey Template, Focus Group Template, Presentation Template, Five Forces Industry Analysis Template, and an instructional guide for all of them. Download yours here now.

3. Surveys or Polls

Surveys are a form of primary research where individuals are asked a collection of questions. It can take many different forms.

They could be in person, over the phone or video call, by email, via an online form, or even on social media. Questions can be also open-ended or closed to deliver qualitative or quantitative information.

A great example of a close-ended survey is HubSpot’s annual State of Marketing .

In the State of Marketing, HubSpot asks marketing professionals from around the world a series of multiple-choice questions to gather data on the state of the marketing industry and to identify trends.

The survey covers various topics related to marketing strategies, tactics, tools, and challenges that marketers face. It aims to provide benchmarks to help you make informed decisions about your marketing.

It also helps us understand where our customers’ heads are so we can better evolve our products to meet their needs.

Apple is no stranger to surveys, either.

In 2011, the tech giant launched Apple Customer Pulse , which it described as “an online community of Apple product users who provide input on a variety of subjects and issues concerning Apple.”

Screenshot of Apple’s Consumer Pulse Website from 2011.

"For example, we did a large voluntary survey of email subscribers and top readers a few years back."

While these readers gave us a long list of topics, formats, or content types they wanted to see, they sometimes engaged more with content types they didn’t select or favor as much on the surveys when we ran follow-up ‘in the wild’ tests, like A/B testing.”  

Pepsi saw similar results when it ran its iconic field experiment, “The Pepsi Challenge” for the first time in 1975.

The beverage brand set up tables at malls, beaches, and other public locations and ran a blind taste test. Shoppers were given two cups of soda, one containing Pepsi, the other Coca-Cola (Pepsi’s biggest competitor). They were then asked to taste both and report which they preferred.

People overwhelmingly preferred Pepsi, and the brand has repeated the experiment multiple times over the years to the same results.

What I like: It yields qualitative and quantitative data and can make for engaging marketing content, especially in the digital age.

What I dislike: It can be very time-consuming. And, if you’re not careful, there is a high risk for scientific error.

Best for: Product testing and competitive analysis

Pro tip: "Don’t make critical business decisions off of just one data set," advises Pamela Bump. "Use the survey, competitive intelligence, external data, or even a focus group to give you one layer of ideas or a short-list for improvements or solutions to test. Then gather your own fresh data to test in an experiment or trial and better refine your data-backed strategy."

Secondary Research

8. Public Domain or Third-Party Research

While original data is always a plus, there are plenty of external resources you can access online and even at a library when you’re limited on time or resources.

Some reputable resources you can use include:

  • Pew Research Center
  • McKinsey Global Institute
  • Relevant Global or Government Organizations (e.g., the United Nations or NASA)

It’s also smart to turn to reputable organizations that are specific to your industry or field. For instance, if you’re a gardening or landscaping company, you may want to pull statistics from the Environmental Protection Agency (EPA).

If you’re a digital marketing agency, you could look to Google Research or HubSpot Research . (Hey, I know them!)

What I like: You can save time on gathering data and spend more time on analyzing. You can also rest assured the data is from a source you trust.

What I dislike: You may not find data specific to your needs.

Best for: Companies under a time or resource crunch, adding factual support to content

Pro tip: Fellow HubSpotter Iskiev suggests using third-party data to inspire your original research. “Sometimes, I use public third-party data for ideas and inspiration. Once I have written my survey and gotten all my ideas out, I read similar reports from other sources and usually end up with useful additions for my own research.”

9. Buy Research

If the data you need isn’t available publicly and you can’t do your own market research, you can also buy some. There are many reputable analytics companies that offer subscriptions to access their data. Statista is one of my favorites, but there’s also Euromonitor , Mintel , and BCC Research .

What I like: Same as public domain research

What I dislike: You may not find data specific to your needs. It also adds to your expenses.

Best for: Companies under a time or resource crunch or adding factual support to content

Which marketing research method should you use?

You’re not going to like my answer, but “it depends.” The best marketing research method for you will depend on your objective and data needs, but also your budget and timeline.

My advice? Aim for a mix of quantitative and qualitative data. If you can do your own original research, awesome. But if not, don’t beat yourself up. Lean into free or low-cost tools . You could do primary research for qualitative data, then tap public sources for quantitative data. Or perhaps the reverse is best for you.

Whatever your marketing research method mix, take the time to think it through and ensure you’re left with information that will truly help you achieve your goals.


  • Open access
  • Published: 07 August 2024

Male autism spectrum disorder is linked to brain aromatase disruption by prenatal BPA in multimodal investigations and 10HDA ameliorates the related mouse phenotype

  • Christos Symeonides   ORCID: orcid.org/0009-0009-9415-4097 1 , 2 , 3   na1 ,
  • Kristina Vacy   ORCID: orcid.org/0009-0000-5330-5260 4 , 5   na1 ,
  • Sarah Thomson   ORCID: orcid.org/0000-0002-5120-3997 4 ,
  • Sam Tanner   ORCID: orcid.org/0009-0003-9363-0756 4 ,
  • Hui Kheng Chua   ORCID: orcid.org/0000-0002-6047-4027 4 , 6 ,
  • Shilpi Dixit   ORCID: orcid.org/0000-0003-4837-0548 4 ,
  • Toby Mansell   ORCID: orcid.org/0000-0002-1282-6331 2 , 7 ,
  • Martin O’Hely   ORCID: orcid.org/0000-0002-0212-1207 2 , 8 ,
  • Boris Novakovic   ORCID: orcid.org/0000-0002-5623-9008 2 , 8 ,
  • Julie B. Herbstman 9 , 10 ,
  • Shuang Wang   ORCID: orcid.org/0000-0002-1693-6888 9 , 11 ,
  • Jia Guo   ORCID: orcid.org/0000-0002-9774-9856 9 , 11 ,
  • Jessalynn Chia 4 ,
  • Nhi Thao Tran   ORCID: orcid.org/0000-0002-0396-9760 4   nAff28 ,
  • Sang Eun Hwang   ORCID: orcid.org/0009-0009-7271-7493 4 ,
  • Kara Britt   ORCID: orcid.org/0000-0001-6069-7856 12 , 13 , 14 ,
  • Feng Chen 4 ,
  • Tae Hwan Kim   ORCID: orcid.org/0009-0000-6163-3483 4 ,
  • Christopher A. Reid 4 ,
  • Anthony El-Bitar 4 ,
  • Gabriel B. Bernasochi   ORCID: orcid.org/0000-0002-3966-2074 4 , 15 ,
  • Lea M. Durham Delbridge   ORCID: orcid.org/0000-0003-1859-0152 15 ,
  • Vincent R. Harley   ORCID: orcid.org/0000-0002-2405-1262 12 , 16 ,
  • Yann W. Yap 6 , 16 ,
  • Deborah Dewey   ORCID: orcid.org/0000-0002-1323-5832 17 ,
  • Chloe J. Love   ORCID: orcid.org/0000-0002-2024-4083 8 , 18 ,
  • David Burgner   ORCID: orcid.org/0000-0002-8304-4302 2 , 7 , 19 , 20 ,
  • Mimi L. K. Tang 2 , 15 ,
  • Peter D. Sly   ORCID: orcid.org/0000-0001-6305-2201 8 , 21 , 22 ,
  • Richard Saffery   ORCID: orcid.org/0000-0002-9510-4181 2 ,
  • Jochen F. Mueller   ORCID: orcid.org/0000-0002-0000-1973 23 ,
  • Nicole Rinehart   ORCID: orcid.org/0000-0001-6109-3958 24 ,
  • Bruce Tonge   ORCID: orcid.org/0000-0002-4236-9688 25 ,
  • Peter Vuillermin   ORCID: orcid.org/0000-0002-6580-0346 2 , 8 , 18 ,
  • the BIS Investigator Group ,
  • Anne-Louise Ponsonby   ORCID: orcid.org/0000-0002-6581-3657 2 , 3 , 4   na2 &
  • Wah Chin Boon 4 , 26   na2  

Nature Communications volume  15 , Article number:  6367 ( 2024 ) Cite this article

5665 Accesses

211 Altmetric

Metrics details

  • Autism spectrum disorders
  • Epigenetics and behaviour

Male sex, early life chemical exposure and the brain aromatase enzyme have been implicated in autism spectrum disorder (ASD). In the Barwon Infant Study birth cohort ( n  = 1074), higher prenatal maternal bisphenol A (BPA) levels are associated with higher ASD symptoms at age 2 and diagnosis at age 9 only in males with low aromatase genetic pathway activity scores. Higher prenatal BPA levels are predictive of higher cord blood methylation across the CYP19A1 brain promoter I.f region ( P  = 0.009) and aromatase gene methylation mediates ( P  = 0.01) the link between higher prenatal BPA and brain-derived neurotrophic factor methylation, with independent cohort replication. BPA suppressed aromatase expression in vitro and in vivo. Male mice exposed to mid-gestation BPA or with aromatase knockout have ASD-like behaviors with structural and functional brain changes. 10-hydroxy-2-decenoic acid (10HDA), an estrogenic fatty acid alleviated these features and reversed detrimental neurodevelopmental gene expression. Here we demonstrate that prenatal BPA exposure is associated with impaired brain aromatase function and ASD-related behaviors and brain abnormalities in males that may be reversible through postnatal 10HDA intervention.


Introduction

Autism spectrum disorder (ASD or autism) is a clinically diagnosed neurodevelopmental condition in which an individual has impaired social communication and interaction, as well as restricted, repetitive behavior patterns 1 . The estimated prevalence of ASD is approximately 1–2% in Western countries 2 , with evidence that the incidence of ASD is increasing over time 3 . While increased incidence is partly attributable to greater awareness of ASD 4 , other factors including early life environment, genes and their interplay are important 5 . Strikingly, up to 80% of individuals diagnosed with ASD are male, suggesting sex-specific neurodevelopment underlies this condition 5 .

Brain aromatase, encoded by CYP19A1 and regulated via brain promoter I.f 6 , 7 , 8 converts neural androgens to neural estrogens 9 . During fetal development, aromatase expression within the brain is high in males 10 in the amygdala 11 , 12 . Notably, androgen disruption is implicated in the extreme male brain theory for ASD 13 , and postmortem analysis of male ASD adults show markedly reduced aromatase activity compared to age-matched controls. Furthermore, CYP19A1 aromatase expression was reduced by 38% in the postmortem male ASD prefrontal cortex 14 , as well as by 52% in neuronal cell lines derived from males with ASD 15 . Environmental factors, including exposure to endocrine-disrupting chemicals such as bisphenols, can disrupt brain aromatase function 16 , 17 , 18 .

Early life exposure to endocrine-disrupting chemicals, including bisphenols, has separately been proposed to contribute to the temporal increase in ASD prevalence 19 . Exposure to these manufactured chemicals is now widespread through their presence in plastics and epoxy linings in food and drink containers and other packaging products 20 . Although bisphenol A (BPA) has since been replaced by other bisphenols such as bisphenol S in BPA-free plastics, all bisphenols are endocrine-disrupting chemicals that can alter steroid signaling and metabolism 21 . Elevated maternal prenatal BPA levels are associated with child neurobehavioral issues 20 including ASD-related symptoms 22 , 23 , with many of these studies reporting sex-specific effects 20 , 22 , 23 , 24 . Furthermore, studies in rodents have found that prenatal BPA exposure is associated with gene dysregulation in the male hippocampus accompanied by neuronal and cognitive abnormalities in male but not female animals 20 , 23 , 24 . One potential explanation is that epigenetic programming by bisphenols increases aromatase gene methylation, leading to its reduced cellular expression 16 and a deficiency in aromatase-dependent estrogen signaling. If such is the case, it is possible that estrogen supplementation, such as with 10-hydroxy-2-decenoic acid (10HDA), a major lipid component of the royal jelly of honeybees, may be relevant as a nutritional intervention for ASD. Indeed, 10HDA is known to influence homeostasis through its intracellular effects on estrogen responsive elements that regulate downstream gene expression 25 , 26 , as well as its capacity to influence neurogenesis in vitro 27 .

Here, we have investigated whether higher prenatal BPA exposure leads to an elevated risk of ASD in males and explore aromatase as a potential underlying mechanism. We demonstrate in a preclinical (mouse) model that postnatal administration of 10HDA, an estrogenic fatty acid, can ameliorate ASD-like phenotypes in young mice prenatally exposed to BPA.

Human studies

We examined the interplay between prenatal BPA, aromatase function and sex in relation to human ASD symptoms and diagnosis in the Barwon Infant Study (BIS) birth cohort 28 . By the BIS cohort health review at 7-11 years (mean = 9.05, SD = 0.74; hereafter referred to as occurring at 9 years), 43 children had a pediatrician- or psychiatrist- confirmed diagnosis of ASD against the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) criteria, as of the 30th of June 2023. ASD diagnosis was over-represented in boys with a 2.1:1 ratio at 9 years (29 boys and 14 girls; Supplementary Table  1 ). In BIS, the DSM-5 oriented autism spectrum problems (ASP) scale of the Child Behavior Checklist (CBCL) at age 2 years 29 predicted diagnosed autism strongly at age 4 and moderately at age 9 in receiver operating characteristic (ROC) curve analyses; area under the curve (AuC) of 0.92 (95% CI 0.82, 1.00) 30 and 0.70 (95% CI 0.60, 0.80), respectively. The median CBCL ASP score in ASD cases and non-cases at 9 years was 51 (IQR = 50, 58) and 50 (IQR = 50, 51), respectively. Only ASD cases with a pediatrician-confirmed diagnosis of ASD against the DSM-5, as verified by the 30th of June 2023, were included in this report. We thus examined both outcomes (ASP scale and ASD diagnosis) as indicators of ASD over the life course from ages 2 to 9 years (Supplementary Table  1 ). Quality control information for the measurement of BPA is presented in Supplementary Table  2 .
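For readers unfamiliar with the metric, an area under the ROC curve like those reported above can be computed from a continuous score and a later binary outcome. The sketch below uses invented toy data, not BIS data.

```python
# Minimal sketch (invented data, not BIS data): AUC from a symptom score
# measured early and a diagnosis label observed later.
from sklearn.metrics import roc_auc_score

scores = [50, 51, 62, 58, 65, 50, 52, 70, 50, 60]   # e.g. CBCL-style T-scores at age 2
diagnosed = [0, 0, 0, 1, 1, 0, 0, 1, 0, 1]          # hypothetical later diagnosis

print("AUC:", roc_auc_score(diagnosed, scores))     # ~0.92 for this toy data
```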

BPA effects on ASD symptoms at age 2 years are most evident in boys genetically predisposed to low aromatase enzyme activity

Of the 676 infants with CBCL data in the cohort sample, 249 (36.8%) had an ASP score above the median based on CBCL normative data (Supplementary Table  1 ). From a whole genome SNP array (Supplementary Methods), a CYP19A1 genetic score for aromatase enzyme activity was developed based on five single nucleotide polymorphisms (SNPs; rs12148604, rs4441215, rs11632903, rs752760, rs2445768) associated with lower estrogen levels 31 . Among 595 children with prenatal BPA and CBCL data, those in the top quartile of the genetic predisposition score, that is, children with three or more variants associated with lower levels of estrogens were classified as ‘low aromatase activity’ with the remaining classified as ‘high aromatase activity’ (Fig.  1 ). Regression analyses stratified by this genetic score and child’s sex were performed and an association between high prenatal BPA exposure (top quartile (>2.18 μg/L) and greater ASP scores was only seen in males with low aromatase activity, with a matched OR of 3.56 (95% CI 1.13, 11.22); P  = 0.03 (Supplementary Table  4 ). These findings were minimally altered following adjustment for additional potential confounders. Among males with low aromatase activity, the fraction with higher than median ASP scores attributable to high BPA exposure (the population attributable fraction) was 11.9% (95% CI 4.3%, 19.0%). These results indicate a link between low aromatase function and elevated ASP scores. A sensitivity analysis using an independent weighted CYP19A1 genetic score confirmed these findings. For the additional score, the Genotype-Tissue Expression (GTEx) portal was first used to identify the top five expression quantitative trait loci (eQTLs; rs7169770, rs1065778, rs28757202, rs12917091, rs3784307) for CYP19A1 in any tissue type that showed a consistent effect direction in brain tissue. A functional genetic score was then computed for each BIS participant by summing the number of aromatase-promoting alleles they carry across the five eQTLs, weighted by their normalized effect size (NES) in amygdala tissue. This score captures genetic contribution to cross-tissue aromatase activity with a weighting towards the amygdala, a focus in our animal studies. The score was then reversed so that higher values indicate lower aromatase activity and children in the top quartile were classified as ‘low aromatase activity’ with the remaining classified as ‘high aromatase activity’. Again, a positive association between prenatal BPA exposure and ASP scores was only seen in males with low aromatase activity, with a matched OR of 3.74 (95% CI 1.12, 12.50); P  = 0.03. Additional adjustment for individual potential confounders provided matched ORs between 3.13 to 3.85 (Supplementary Table  5 ).
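The weighted score described above has a simple structure: count the aromatase-promoting alleles at each eQTL and weight the counts by the normalized effect size. The sketch below uses the five eQTL identifiers named in the text, but the allele counts and NES weights are invented placeholders rather than the values used in BIS.

```python
# Illustrative sketch of a weighted allele score. SNP identifiers follow the
# text above; NES weights and allele counts are invented placeholders.
eqtl_weights = {
    "rs7169770": 0.40,
    "rs1065778": 0.35,
    "rs28757202": 0.30,
    "rs12917091": 0.25,
    "rs3784307": 0.20,
}

participant_alleles = {   # copies (0, 1 or 2) of the aromatase-promoting allele
    "rs7169770": 2,
    "rs1065778": 1,
    "rs28757202": 0,
    "rs12917091": 1,
    "rs3784307": 2,
}

score = sum(eqtl_weights[snp] * participant_alleles[snp] for snp in eqtl_weights)
print(f"weighted aromatase-activity score: {score:.2f}")
# The study then reverses the score (higher = lower aromatase activity) and
# classifies the top quartile as 'low aromatase activity'.
```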

Figure 1

Conditional logistic regression models were run where participants were matched on ancestry and time of day of urine collection and, for ASD diagnosis at 9 years, each case within these matched groups was individually matched to eight controls based on nearest date of and age at year 9 interview. BPA was classified in quartiles with the top quartile above 2.18 μg/L as high BPA exposure vs the other three quartiles. ‘Low aromatase enzyme activity’ means being in the top quartile and ‘high aromatase enzyme activity’ means being in the lower three quartiles of an unweighted sum of the following genotypes associated with lower estrogen levels 31 (participant given 1 if genotype is present, 0 if not): CC of rs12148604, GG of rs4441215, CC of rs11632903, CC of rs752760, AA of rs2445768. ‘Greater ASD symptoms’ represents a T-score above 50 (that is, above median based on normative data) on the DSM-5-oriented autism spectrum problems scale of the Child Behavior Checklist for Ages 1.5-5 (CBCL). Data are OR ± 95% CI. Source data are provided as a Source Data file. * Since there were only two ASD cases at age 9 in the girls with low aromatase enzyme activity group, the regression model was not run.

BPA effects on ASD diagnosis at 9 years are most evident in boys genetically predisposed to low aromatase enzyme activity

In subgroup analyses where we stratified by child’s sex and unweighted CYP19A1 genetic score, the results were consistent with those found at 2 years. A positive association between high prenatal BPA exposure and ASD diagnosis was only seen in males with low aromatase activity, with a matched OR of 6.24 (95% CI 1.02, 38.26); P  = 0.05 (Supplementary Table  4 ). In this subgroup, the fraction of ASD cases attributable to high BPA exposure (the population attributable fraction) was 12.6% (95% CI 5.8%, 19.0%). In a sensitivity analysis where the weighted CYP19A1 genetic score was used, a similar effect size was observed in this subgroup; matched OR = 6.06 (95% CI 0.93, 39.43), P  = 0.06 (Supplementary Table  4 ).
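For context, one common case-based estimator of a population attributable fraction combines the odds ratio with the proportion of cases that were exposed (Miettinen's formula, PAF = p_c(OR - 1)/OR). The sketch below uses a hypothetical exposure proportion and is not how the article's matched estimates were derived.

```python
# Minimal sketch of Miettinen's case-based attributable fraction.
# The exposure proportion is hypothetical; the study's own PAF estimates
# came from its matched regression models.
def attributable_fraction(odds_ratio: float, prop_cases_exposed: float) -> float:
    return prop_cases_exposed * (odds_ratio - 1.0) / odds_ratio

print(attributable_fraction(odds_ratio=6.24, prop_cases_exposed=0.15))  # ~0.13
```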

Higher prenatal BPA exposure predicts higher methylation of the CYP19A1 brain promoter PI.f in human cord blood

We investigated the link between BPA and aromatase further by evaluating epigenetic regulation of the aromatase gene at birth in the same BIS cohort. CYP19A1 (in humans; Cyp19a1 in the mouse) has eleven tissue-specific untranslated first exons under the regulation of tissue-specific promoters. The brain-specific promoters are PI.f 6 , 7 , 8 and PII 17 . For a window positioned directly over the primary brain promoter PI.f, higher BPA was positively associated with average methylation, mean increase = 0.05% (95% CI 0.01%, 0.09%); P  = 0.009 (Fig.  2 ). Higher BPA levels predicted methylation across both PI.f and PII as a composite, mean increase per log 2 increase = 0.06% (95% CI 0.01%, 0.10%); P  = 0.009. Methylation of a control window, comprising the remaining upstream region of the CYP19A1 promoter and excluding both PI.f and PII brain promoters, did not associate with BPA, P  = 0.12. These findings persisted after adjustment for the CYP19A1 genetic score for aromatase enzyme activity. Thus, higher prenatal BPA exposure was associated with increased methylation of brain-specific promoters in CYP19A1 . Sex-specific differences were not observed. While these effects were identified in cord blood, methylation of CYP19A1 shows striking concordance between blood and brain tissue (Spearman’s rank correlation across the whole gene: ρ = 0.74 (95% CI 0.59, 0.84); over promoter PI.f window: ρ = 0.94 (95% CI 0.54, 0.99) 32 . Thus, prenatal BPA exposure significantly associates with disruption of the CYP19A1 brain promoter and hence likely the level of its protein product, aromatase.

Figure 2

Visualized using the coMET R package. A Association of individual CpGs along the region of interest with BPA exposure, overlaid with three methylation windows: a 2 CpG window positioned directly on promoter PII, and 7 and 15 CpG windows overlapping PI.f. The red shading reflects each CpG’s level of methylation (beta value). B The CYP19A1 gene, running right to left along chromosome 15, and the positions of both brain promoters. Orange boxes indicate exons. C A correlation matrix for all CpGs in this region. Highlighted in tan are the two CpGs located within the PII promoter sequence and the single CpG located within PI.f. For the 7 CpG window over promoter PI.f, higher BPA associated positively with methylation, mean increase = 0.05% (95% CI 0.01%, 0.09%); P  = 0.009, after adjustment for relevant covariates including cell composition. The BPA-associated higher methylation of the brain promoter PI.f region remained evident when the window was expanded to 15 CpGs (mean increase = 0.06%, 95% CI [0.01%, 0.11%], P  = 0.04). For PII, the BPA-associated mean methylation increase was 0.07%, 95% CI [-0.02%, 0.16%], P  = 0.11). BPA also associated positively with methylation across both PI.f and PII as a composite, mean increase = 0.06% (95% CI 0.01%, 0.10%); P  = 0.009. For the remainder of CYP19A1 , excluding both PI.f and PII brain promoters, there was no significant association, P  = 0.12. Higher CYP19A1 brain promoter methylation leads to reduced transcription 17 . All statistical tests are two sided. Source data are provided as a Source Data file.

Replication of the association between higher BPA levels and hypermethylation of the CYP19A1 brain promoter

Previously, the Columbia Centre for Children’s Health Study-Mothers and Newborns (CCCEH-MN) cohort (Supplementary Table  3 ) found BPA increased methylation of the BDNF CREB-binding region of promoter IV both in rodent blood and brain tissue at P28 and in infant cord blood in the CCCEH-MN cohort 33 . In rodents, BDNF hypermethylation occurred concomitantly with reduced BDNF expression in the brain 33 . Re-examining the CCCEH-MN cohort, BPA level was also associated with hypermethylation of the aromatase brain promoter P1.f (adjusted mean increase 0.0040, P  = 0.0089), replicating the BIS cohort finding.

Molecular mediation of higher BPA levels and hypermethylation of BDNF through higher methylation of CYP19A1

In BIS, we aimed to reproduce these BDNF findings and extend them to investigate aromatase methylation as a potential mediator of the BPA- BDNF relationship. A link between aromatase and methylation of the BDNF CREB-binding region is plausible given that estrogen (produced by aromatase) is known to elevate brain expression of CREB 34 , 35 . In BIS, male infants exposed to BPA (categorized as greater than 4 µg/L vs. rest, following the CCCEH-MN study) had greater methylation of the BDNF CREB-binding site (adjusted mean increase = 0.0027, P  = 0.02). This was also evident overall (adjusted mean increase = 0.0023, P  = 0.006), but not for females alone (adjusted mean increase = 0.0019, P  = 0.13). We then assessed whether methylation of aromatase promoter P1.f mediates this association. In both cohorts, aromatase methylation was positively associated with BDNF CREB-binding-site methylation in males (BIS, adjusted mean increase = 0.07, P  = 0.0008; CCCEH-MN, adjusted mean increase = 0.91, P  = 0.0016). In the two overall cohorts, there was evidence that the effect of increased BPA on BDNF hypermethylation was mediated partly through higher aromatase methylation (BIS, indirect effect, P  = 0.012; CCCEH-MN, indirect effect, P  = 0.012).
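As a schematic of the mediation logic only (not the matched, covariate-adjusted models used here), the product-of-coefficients sketch below simulates an exposure, a mediator, and an outcome, and estimates the indirect effect as the product of the two path coefficients.

```python
# Illustrative product-of-coefficients mediation sketch on simulated data:
# exposure (BPA) -> mediator (CYP19A1 promoter methylation) -> outcome
# (BDNF methylation). Effect sizes and data are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
bpa = rng.normal(size=n)                                          # exposure
cyp19a1_meth = 0.3 * bpa + rng.normal(size=n)                     # mediator
bdnf_meth = 0.4 * cyp19a1_meth + 0.1 * bpa + rng.normal(size=n)   # outcome

# Path a: exposure -> mediator
a = sm.OLS(cyp19a1_meth, sm.add_constant(bpa)).fit().params[1]
# Path b: mediator -> outcome, adjusting for exposure
X = sm.add_constant(np.column_stack([cyp19a1_meth, bpa]))
b = sm.OLS(bdnf_meth, X).fit().params[1]

print("indirect (mediated) effect a*b:", a * b)   # ~0.12 in expectation
```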

Prenatal programming laboratory studies: BPA effects on cellular aromatase expression in vitro, neuronal development, and behavioral phenotype in mice

BPA reduces aromatase expression in human neuroblastoma SH-SY5Y cell cultures.

To validate the findings of our human observational studies on BPA and aromatase expression, we began by studying the effects of BPA exposure on aromatase expression in the human neuroblastoma cell line SH-SY5Y (Fig.  3A ). By Western blot analysis, aromatase protein levels more than halved in the presence of 50 μg/L BPA ( P  = 0.01; Fig.  3B ).

Fig. 3

A Western blot (1 representative blot) demonstrates that increasing BPA concentrations reduced immunoblotted aromatase protein signals (green fluorescence, 55 kDa) in lysates from human-derived neuroblastoma SH-SY5Y cells. Each sample was normalized to its internal housekeeping protein β-Actin (red fluorescence, 42 kDa). B Aromatase immunoblotted signals in SH-SY5Y cells treated with vehicle or BPA (n = 3 independent experiments/group). Five-day BPA treatment of SH-SY5Y cells leads to a significant reduction in aromatase following 50 µg/L (MD = 89, t(6) = 4.0, P = 0.01) and 100 µg/L (MD = 85, t(6) = 3.9, P = 0.01) BPA treatment, compared to vehicle. C BPA treatment (50 µg/kg/day) of Cyp19-EGFP mice at E10.5-E14.5 results in fewer EGFP+ neurons in the medial amygdala (MD = −5334, t(4) = 5.9, P = 0.004) compared to vehicle mice, n = 3 mice per treatment. Independent t-tests were used and, where there were more than two experimental groups (B), P-values were corrected for multiple comparisons using Holm-Sidak. All statistical tests were two-sided. Plots show mean ± SEM. Source data are in a Source Data file. Note: UT = untreated.

The effects of prenatal BPA exposure on aromatase-expressing neurons within the amygdala of male mice

There is prominent expression of aromatase within cells of the male medial amygdala (MeA) 11. To visualize aromatase-expressing cells, we studied genetically modified Cyp19-EGFP transgenic mice harboring a single copy of a bacterial artificial chromosome (BAC) in which the coding sequence for enhanced green fluorescent protein (EGFP) is inserted upstream of the ATG start codon of the aromatase gene (Cyp19a1) 11 (see Methods). EGFP expression (EGFP+ cells) in male mice was detected as early as embryonic day (E) 11.5 (Supplementary Fig. 1), indicating that aromatase gene expression is detectable in early CNS development.

To study the effects of prenatal BPA exposure on brain development, pregnant dams were subjected to BPA at a dose of 50 μg/kg/day via subcutaneous injection, or to a vehicle injection, during a mid-gestation window of E10.5 to E14.5, which coincides with amygdala development. This dose matches current USA recommendations 36, 37 as well as the Tolerable Daily Intake (TDI) set by the European Food Safety Authority (EFSA) at the time that the mothers in our human cohort were pregnant 28, 38. In these experiments, we observed that prenatal BPA exposure led to a 37% reduction (P = 0.004) in EGFP+ neurons in the MeA of male Cyp19-EGFP mice compared to vehicle-treated control mice (Fig. 3C). These results are consistent with our findings in SH-SY5Y cells indicating that BPA exposure leads to a marked reduction in the cellular expression of aromatase.

Prenatal BPA exposure at mid-gestation influences social approach behavior in male mice

Next, we evaluated post-weaning social approach behavior (postnatal days (P) 21–24) using a modified three-chamber social interaction test 39 (Fig. 4C). Male mice with prenatal exposure to BPA spent less time investigating sex- and age-matched stranger mice than vehicle-treated males (mean time ± SEM of 101.2 ± 11.47 s vs. 177.3 ± 26.97 s, P = 0.0004; Fig. 4A). Such differences were not observed for female mice prenatally exposed to BPA (Fig. 4A). As a control for these studies, we confirmed that the presence of the EGFP BAC transgene did not itself influence behavior in this test (Supplementary Fig. 2), and the proportions of EGFP transgenic mice were not significantly different across BPA-exposed and vehicle-exposed cohorts.

Fig. 4

Sociability is defined as a higher proportion of time spent in the stranger interaction zone than in the empty interaction zone. In the three-chamber social interaction test (A), BPA-exposed mice (n = 30, MD = 75 s, t(96) = 3.7, P = 0.0004) spent less time investigating the stranger mouse compared with male control (n = 21) mice. B Male ArKO (n = 8, MD = 43 s, t(30) = 2.3, P = 0.03) mice also spent less time with the stranger compared to male WT littermates (n = 9). C A schematic of the 3-Chamber Sociability Trial. Created with BioRender.com. D Male BPA-exposed mice (n = 12, MD = 8.2, U = 11, P = 0.048) spent more time grooming compared to control (n = 5) mice. There were no differences between female BPA-exposed (n = 9) and female control (n = 6) mice. Independent t-tests were used and P-values were corrected for multiple comparisons using Holm-Sidak. For (D), a Mann–Whitney U test was used. All statistical tests were two-sided. Plots show mean ± SEM. Source data are in a Source Data file. Note: Veh = vehicle.

To determine if the effects of prenatal BPA exposure were developmentally restricted, we delivered subcutaneous injections (50 µg/kg/day) of BPA to pregnant dams at early (E0.5–E9.5), mid (E10.5–E14.5), and late (E15.5–E20.5) stages of gestation. From these experiments, we found that while male pups exposed to BPA in mid-gestation developed a social approach deficit, such behavioral impairments were not observed for early or late gestation BPA exposure (Supplementary Fig.  3 ). In addition, we performed experiments in which BPA was available to dams by voluntary, oral administration (50 µg/kg/day) during mid-gestation. As shown, a social approach deficit was again observed in male mice (Supplementary Fig.  4 ), consistent with results from prenatal (mid-gestation) BPA exposure by subcutaneous injections. Thus, we find that prenatal BPA exposure at mid-gestation (E10.5-E14.5) in mice leads to reduced social approach behavior in male, but not female offspring. Notably, the amygdala of embryonic mice undergoes significant development during mid-gestation 40 .

Aromatase knockout (ArKO) male mice have reduced social behavior

Having demonstrated that BPA exposure reduces aromatase expression in SH-SY5Y cells and that prenatal exposure affects the postnatal behavior of mice, we next asked whether the aromatase gene (Cyp19a1) is central to these phenotypes. To address this, we performed social approach behavioral testing (Supplementary Fig. 5) on aromatase knockout (ArKO) mice 41, which have undetectable aromatase expression. A social preference for the stranger interaction zone over the empty zone was evident for wildtype (P = 0.003; Fig. 4B) but not ArKO (P = 0.45; Fig. 4B) males. This male-specific social interaction deficit is similar to that of the BPA-exposed pups. Further, postnatal estrogen replacement reversed the reduction in sociability seen in ArKO males (P = 0.03; Supplementary Fig. 5), resulting in a stranger-to-empty preference in E2-treated ArKO mice similar to that observed for wildtype. Female ArKO pups did not have a sociability deficit (Supplementary Fig. 5).

Further, we did not observe any behavioral differences between ArKO and WT mice, or between BPA-exposed and unexposed mice, of either sex in the Y-maze test. All groups were able to distinguish the novel arm from the familiar arm, spending significantly more time in the novel arm than the familiar arm (Supplementary Fig. 6), excluding major intergroup differences in short-term memory, motor, or sensory function as contributors.

Prenatal exposure to BPA affects repetitive behavior in male mice

Using the water squirt test, we have previously reported that male, but not female, ArKO mice display excessive grooming, a form of repetitive behavior, compared to WT mice 42. We therefore conducted the water squirt test on BPA-exposed mice and found that male, but not female, BPA-exposed mice exhibited excessive grooming behavior (P = 0.048; Fig. 4D). Thus, male prenatally BPA-exposed mice and male ArKO mice, but not females, exhibited such repetitive behaviors compared to control mice.

The development of the MeA is altered in male ArKO mice as well as in prenatal BPA-exposed male mice

The development and function of the amygdala are highly relevant to human brain development and ASD 43, 44. Notably, the medial amygdala (MeA) is central to emotional processing 45, and this tissue is a significant source of aromatase-expressing neurons. Given that aromatase function in the amygdala is significant for human cognition 46 and behavior 12, 47, and that aromatase is highly expressed in the mammalian MeA, particularly in male mice 11, we investigated changes to the structure and function of this brain region. Performing stereology analyses on cresyl violet (Nissl)-stained sections of the male MeA, we observed a 13.5% reduction in neuron number (neurons defined by morphology, size, and the presence of a nucleolus): compared to vehicle-exposed males, BPA-exposed males had a significantly reduced total neuron number (mean count of 91,017 ± SEM 2728 neurons vs. 78,750 ± SEM 3322 neurons, P = 0.046; Supplementary Fig. 7).

We further examined the characteristics of cells within this amygdala structure in detail using Golgi staining (Fig.  5A, B ). We found that the apical and basal dendrites in the MeA were significantly shorter in male BPA-exposed mice vs. vehicle-treated mice (apical: 29.6% reduction, P  < 0.0001; basal, P  < 0.0001). This phenotype was also observed for male ArKO vs. WT mouse brains (apical 35.0% reduction, P  < 0.0001; basal 31.9% reduction, P  < 0.0001; Fig.  5A ). Dendritic spine densities of apical and basal dendrites of male ArKO mice, as well as male mice exposed to BPA, were also significantly reduced (KO vs WT apical, P  = 0.01; KO vs WT basal, P  < 0.0001; BPA treated vs vehicle treated apical P  < 0.0001; BPA treated vs vehicle treated basal, P  = 0.004; Fig.  5B ). The dendritic lengths (Fig.  5A ) and spine densities (Fig.  5B ) for apical and basal neurites within the MeA of female ArKO mice or BPA-exposed mice were not significantly different compared to control. Thus, in the context of aromatase suppression by prenatal BPA-exposure, or in ArKO mice lacking aromatase, we find that the apical and basal dendrite features within the MeA are affected in a sexually dimorphic manner.

Fig. 5

A Golgi staining showed shorter apical and basal dendrites in male BPA-exposed (apical: n = 27, β = −136 μm, 95% CI [−189, −83], P = 6.0 × 10−7; basal: n = 27 neurons, β = −106, 95% CI [−147, −64], P = 5.1 × 10−7) and ArKO mice (apical: n = 27, β = −194, 95% CI [−258, −130], P = 2.9 × 10−9; basal: n = 27, β = −121, 95% CI [−143, −100], P = 1.2 × 10−29) compared to male vehicle (apical n = 27, basal n = 27) or WT (apical n = 27, basal n = 29). Female BPA-exposed mice had longer basal dendrites vs. vehicle (n = 26 neurons/group, β = 133, 95% CI [76, 191], P = 5.5 × 10−6), while female ArKO mice had shorter basal dendrites vs. WT (n = 22 neurons/group, β = −45, 95% CI [−91, −0.2], P = 0.049). Significant sex-by-BPA-treatment interaction effects were observed for apical (P = 0.0002) and basal (P = 3.0 × 10−11) dendritic lengths, and a sex-by-genotype interaction for basal length (P = 0.003) but not apical length (P = 0.19). B Golgi staining showed male BPA-exposed (apical: n = 39, β = −7.0, 95% CI [−10.1, −4.0], P = 5.5 × 10−6; basal: n = 90, β = −3.4, 95% CI [−5.7, −1.1], P = 0.004) and ArKO (apical: n = 51, β = −6.8, 95% CI [−12.2, −1.3], P = 0.01; basal: n = 97, β = −3.8, 95% CI [−5.4, −2.2], P = 5.2 × 10−6) mice had lower spine densities on apical and basal dendrites vs. vehicle (apical n = 46, basal n = 106) or WT (apical n = 53, basal n = 109) mice. Female mice exhibited no spine density differences for BPA exposure (apical n = 74, basal n = 106) vs. vehicle (apical n = 61, basal n = 103) and ArKO (apical n = 94, basal n = 86) vs. WT (apical n = 88, basal n = 83). There was a significant sex-by-BPA-treatment interaction for apical spine density (P = 0.0005) but not basal (P = 0.99), and no significant sex-by-genotype interactions (apical: P = 0.08; basal: P = 0.19). For Golgi staining experiments, 3 mice/group with 6–9 neuron measures/mouse. Spine count datapoints represent the number of spines on a single 10 μm concentric circle. C c-Fos fluorescent immunostaining in adult male PD-MeA revealed fewer c-Fos+ve cells in BPA-exposed (n = 3) vs. vehicle mice (n = 3; mean difference MD = 3687, t(4) = 16.12, P < 0.0001) and ArKO (n = 4) vs. WT mice (n = 4; MD = −10237; t(4) = 6.48, P = 0.0002). Early postnatal estradiol restored ArKO c-Fos to WT levels (n = 4; MD = −3112; t(4) = 1.97, P = 0.08). D Microelectrode array electrophysiology showed a lower rate of change in EPSP over 1–4 volts in male BPA-exposed mice (n = 5 mice, n = 11 slices) vs. vehicle (n = 7 mice, n = 12 slices; P = 0.02). Generalized estimating equations were used, clustering by mouse (A, B) or voltage input (D) and assuming an exchangeable correlation structure. For (C), independent t-tests were used and, where there were more than two experimental groups (ArKO analysis), P-values were corrected for multiple comparisons using Holm–Sidak. All statistical tests were two-sided. Plots show mean ± SEM. Source data are in a Source Data file.
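
This legend (and several later ones) describes generalized estimating equations clustered by mouse with an exchangeable working correlation, to account for repeated neuron measurements within each animal. Below is a minimal sketch of such a model using the geepack R package; the data frame and column names (dendrites, length_um, group, mouse_id) are hypothetical.

```r
# Minimal GEE sketch (illustrative, not the authors' code): dendrite length vs. treatment,
# clustering neurons within mouse with an exchangeable working correlation.
library(geepack)

gee_fit <- geeglm(length_um ~ group,        # e.g. BPA-exposed vs. vehicle
                  id = mouse_id,            # cluster: repeated neurons per mouse
                  data = dendrites,
                  family = gaussian,
                  corstr = "exchangeable")
summary(gee_fit)                            # robust (sandwich) standard errors and P values
```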

Prenatal BPA exposure or loss of aromatase in ArKO male mice leads to amygdala hypoactivation and alters behavioral response to a novel social stimulus

The amygdala, a social processing brain region, is hyporesponsive in ASD (see review ref. 48 ). A post-mortem stereology study reported that adolescents and adults diagnosed with ASD feature an ~15% decrease in the numbers of neurons within the amygdala 43 . Also, functional MRI studies report amygdala hypoactivation in participants with ASD compared to controls 49 . Given that the amygdala is a significant source of aromatase-expressing neurons, we next conducted a series of studies to explore how aromatase deficiency influences the male mouse amygdala, using a combination of c-Fos immunohistochemistry, Golgi staining of brain sections, as well as electrophysiological analyses.

To investigate amygdala activation responses after interaction with a stranger mouse, we performed c-Fos immunohistochemistry (c-Fos is a marker of neuronal activation 50; Supplementary Fig. 8). Prenatally BPA-exposed mice had 58% fewer c-Fos-positive neurons in the amygdala than vehicle-exposed mice (P < 0.0001; Fig. 5C). Similarly, we found that the MeA of ArKO mice had a marked deficit (67% fewer) of c-Fos-positive neurons compared with WT mice (P = 0.0002), which was ameliorated by early postnatal estradiol replacement (Fig. 5C). Therefore, prenatal BPA exposure or loss of aromatase expression in ArKO mice leads to amygdala hypoactivation.

Next, we measured the synaptic excitability (input/output, I/O curve) of the MeA using multiple-electrode analysis, with excitatory postsynaptic potential (EPSP) output indicative of electrical firing by local neurons. Compared to the corresponding controls, MeA excitability was significantly reduced in male mice prenatally exposed to BPA as well as in male ArKO mice (Figs. 5D and 9D). At a 4-volt input, BPA treatment resulted in a 22.8% lower (P = 0.02) EPSP output than vehicle treatment, while a 21% reduction (P = 0.03) in signal was observed for male ArKO mice compared to male WT mice. Thus, prenatal BPA exposure leads to functional hypoactivation of the amygdala of male mice, and this pattern is also evident in male ArKO mice.

Prenatal BPA exposure or loss of aromatase in ArKO male mice leads to abnormalities in neuronal cortical layer V as well as brain function

It has been reported that individuals with ASD show distinct anatomical changes within the somatosensory cortex, including in neurons of cortical layer V 51. We previously reported that layer V of the somatosensory cortex is disrupted in ArKO mice 52. We therefore performed Golgi staining to study the apical and basal dendrites of neurons within layer V of the somatosensory cortex following prenatal BPA exposure, as well as in ArKO mice. Apical and basal dendrite lengths of layer V cortical neurons were significantly decreased in male mice prenatally exposed to BPA compared with vehicle-treated mice (apical P = 0.04; basal P < 0.0001; Fig. 6A). Such reductions in dendrites were also observed in male ArKO vs. WT mice (apical P < 0.0001; basal P = 0.02; Fig. 6A). Furthermore, dendritic spine densities on apical dendrites were also reduced (BPA-exposed mice vs. vehicle, P = 0.04; ArKO vs. WT mice, P = 0.01; Fig. 6B).

Fig. 6

A Golgi staining showed shorter apical and basal dendrites in male BPA-exposed (apical: n = 36, β = −350 μm, 95% CI [−679, −20], P = 0.04; basal: n = 36, β = −217, 95% CI [−315, −119], P = 1.4 × 10−5) and ArKO mice (apical: n = 35, β = −541.9, 95% CI [−666, −417], P = 1.3 × 10−17; basal: n = 35, β = −163, 95% CI [−308, −17], P = 0.02) compared to male vehicle (apical n = 35, basal n = 36) or WT (apical n = 36, basal n = 36). B Golgi staining showed male BPA-exposed (apical: n = 186, β = −4.7, 95% CI [−9.2, −0.2], P = 0.04; basal: n = 40, β = −6.7, 95% CI [−16, 2.8], P = 0.17) and ArKO (apical: n = 148, β = −4.4, 95% CI [−7.7, −1.0], P = 0.01; basal: n = 51, β = −5.2, 95% CI [−14.4, 4.1], P = 0.27) mice had lower spine densities on apical but not on basal dendrites vs. vehicle (apical n = 189, basal n = 56) or WT mice (apical n = 185, basal n = 55). For Golgi staining experiments, 3 mice/group with 9–12 neuron measures/mouse. Spine count datapoints represent the number of spines on a single 10 μm concentric circle. C Representative photomicrographs of Golgi-stained cortical neurons; scale bar is 100 μm. D Electrocorticograms (ECoG) revealed an increase in the average spectral power at 8 Hz in BPA-exposed mice (n = 4; *a 8 Hz: MD = −0.5, t(325) = 3.4, P = 0.01) and (E) at 4–6 Hz in ArKO mice (n = 4; *b 4 Hz: MD = −0.2, t(120) = 4.3, P = 0.0006; *c 5 Hz: MD = −0.2, t(120) = 6.1, P < 0.0001; *d 6 Hz: MD = −0.2, t(120) = 5.2, P < 0.0001) vs. vehicle (n = 7) or WT (n = 4) mice. Generalized estimating equations were used, clustering by mouse (A, B) and assuming an exchangeable correlation structure. For (D) and (E), independent t-tests were used and P-values were corrected for multiple comparisons using Holm-Sidak. All statistical tests were two-sided. Plots show mean ± SEM. Source data are in a Source Data file.

To explore the effects of reduced aromatase on cortical activity, we performed electrocorticography (ECoG) recordings from mice in both experimental models (Fig. 6D, E). Spectral analysis revealed increased power in the range of 4–6 Hz for ArKO vs. WT mice (4 Hz P = 0.0006, 5 Hz P < 0.0001, 6 Hz P < 0.0001; Fig. 6E) and at 8 Hz for BPA-exposed vs. vehicle mice (P = 0.01; Fig. 6D). These data indicate that BPA exposure or loss of aromatase in ArKO mice affects cortical activity, a result reminiscent of the cortical dysfunction evidenced by EEG recordings in human participants diagnosed with ASD 53.

Molecular docking simulations indicate 10HDA is acting as a ligand at the same site as BPA on Estrogen Receptors α and β

It has been reported that BPA interferes with estrogen signaling through its competitive interaction and binding with estrogen receptors α (ERα) and β (ERβ) 54. To explore this in the context of our findings, we used high-resolution in silico 3D molecular docking simulations to model the binding of the natural ligand 17β-estradiol, the putative ligand BPA, and a putative therapeutic ligand of interest, 10HDA, to ERα (Protein Data Bank (PDB) ID: 5KRI) and ERβ (PDB ID: 1YYE). Our spatial analysis indicated that all three ligands have robust binding affinity (Fig. 7; Supplementary Movie 1). However, while docking alignment revealed that the predicted fit for 10HDA is strikingly similar to that of 17β-estradiol 25, 55, BPA showed a greater mismatch (Fig. 7D), consistent with previous reports that BPA is 1000-fold less estrogenic than the native ligand 56. Thus, at least for ERα and ERβ, we find that 10HDA may be effective as a competitive ligand that could counteract the effects of BPA on estrogen signaling within cells.

Fig. 7

In silico molecular docking analysis of estrogen receptor β (ERβ, Protein Data Bank (PDB) ID: 1YYE; encoded by the ESR2 gene) using the DockThor platform, showing binding predictions for (A) the native ligand 17β-estradiol (E2), (B) bisphenol A (BPA), (C) trans-10-hydroxy-2-decenoic acid (10HDA), and (D) E2 and BPA (left) and E2 and 10HDA (right) superimposed for spatial alignment comparison. While the molecular affinities of BPA and 10HDA for ERβ were comparable (−9.2 vs. −7.9, respectively), 10HDA aligns better with the binding conformation of the endogenous ligand E2, which activates the receptor. BPA has previously been reported as sub-optimally estrogenic 106 (>1000-fold less potent than natural estradiol 54, 56), whereas 10HDA has an estrogenic role in nature 25, 55. Thus, 10HDA may compensate for E2 deficiency caused by a reduction in aromatase enzyme, and compete with binding by BPA. Please see Supplementary Movie 1 for a video of the above molecular docking of ERβ with E2 superimposed with BPA and Supplementary Movie 2 for the above molecular docking of ERβ with E2 superimposed with 10HDA.

In vitro effects of BPA and 10HDA in primary fetal cortical cell cultures from male brains

In primary cortical cultures from male fetal brains, BPA treatment alone reduced both neurite length (P = 0.0004) and dendritic spine density (P < 0.0001) (Fig. 8A, BPA; quantified in Fig. 8B, C). Co-administration of 10HDA ameliorated these adverse effects of BPA (Fig. 8A, 10HDA + BPA; quantified in Fig. 8B, C).

Fig. 8

A Representative photomicrographs of primary cultures of embryonic (ED15.5) mouse cortical neurons; red staining is βIII-tubulin and green is aromatase. Scale bar is 100 μm. B Compared to the BPA group, the vehicle group (β = 79.9, 95% CI [36, 124], P = 0.0004) and the BPA + 10HDA group (β = 174, 95% CI [102, 247], P = 2.4 × 10−7) have significantly longer neurites. The BPA + 10HDA group (β = 94, 95% CI [8, 180], P = 0.03) has longer neurites compared to the vehicle group, and there is no difference between the vehicle and 10HDA groups. C Compared to the BPA group, the vehicle group (β = 16, 95% CI [11, 21], P = 1.7 × 10−8) and the BPA + 10HDA group (β = 30, 95% CI [21, 40], P = 8.4 × 10−10) have significantly higher spine densities. The BPA + 10HDA group (β = 14, 95% CI [4, 24], P = 0.006) has a higher spine density compared to the vehicle group, and there is no difference between the vehicle and 10HDA groups. n = 10 neurons/group. Primary cortical cell culture was obtained from 12 male mouse embryos. Spine count datapoints represent the number of spines on a single 10 μm concentric circle. Generalized estimating equations were used, clustering by mouse and assuming an exchangeable correlation structure. All statistical tests were two-sided. Plots show mean ± SEM. Source data are in a Source Data file.

In vivo effects of 10HDA on BPA mouse model

Guided by our findings in cultured neurons, we next investigated the effects of postnatal 10HDA administration on mice prenatally exposed to BPA at mid-gestation, as follows. After weaning, pups (six litters, 3 weeks of age) were administered daily injections of 10HDA (0 or 500 μg/kg/day; dissolved in saline, i.p.) for 3 weeks, after which pups were assessed for behavioral phenotypes. Strikingly, 10HDA treatment significantly improved social interaction (Fig. 9A). To determine whether the effect of 10HDA administration is permanent, all treatments were withdrawn for 3 months, and mouse behaviors were subsequently re-tested. Withdrawal of 10HDA treatment in BPA-exposed male mice resulted in a deficit in social interaction (Fig. 9B), and this deficit was once again ameliorated by a subsequent round of 10HDA treatment at 5 months of age, in adulthood (Fig. 9C). Taken together, these data demonstrate that continuous postnatal 10HDA administration is effective for ameliorating social interaction deficits in male mice following prenatal BPA exposure.

Fig. 9

A 10HDA treatment increased social approach in males (n = 10/group, MD = 41.14, U = 11, P = 0.03) but not females (n = 8/group), compared to saline controls. B After 3 months of treatment withdrawal, male mice (saline n = 8, 10HDA n = 7) no longer spent more time interacting with strangers, compared to vehicle treatment. C When male mice (n = 8/group) were subsequently treated with a second round of 10HDA, social approach behavior was once again significantly elevated (MD = 39.2, U = 5, P = 0.003), indicative of a rescue of this behavioral effect. D Compared to the WT Saline (n = 10) group (β = 57.1, 95% CI [47.4, 66.8]), EPSP increases at a 21% lower rate with increasing input in the ArKO Saline (n = 14) group (β = 45.1 μV, 95% CI [40.1, 50.1], P = 0.03). No differences in slope were detected when comparing the WT Saline group with each of the other two treatment groups (WT n = 18, KO n = 12). Mann–Whitney U tests were used; for (D), generalized estimating equations were used, clustering by voltage input and assuming an exchangeable correlation structure. All statistical tests were two-sided. Plots show mean ± SEM. Source data are in a Source Data file. Note: Sal = saline, w/d = withdrawal.

Next, we wanted to determine whether the hypoactivity arising from the absence of aromatase in the amygdala could be influenced by 10HDA. To address this question, we studied ArKO mice using multiple-electrode analyses following 3 weeks of treatment with 10HDA (500 μg/kg/day, i.p.). As shown in Fig. 9D, the electrical activity of the male ArKO amygdala treated with 10HDA was similar to male WT activity levels, whereas the saline-treated male ArKO amygdala showed significantly lower activity (P = 0.03) when stimulated by an input/output paradigm, suggesting that 10HDA treatment was effective in compensating for the absence of aromatase. Therefore, we interpret these results to suggest that 10HDA restores signaling deficits arising from aromatase deficiency. Given that prenatal BPA exposure suppresses aromatase, 10HDA supplementation may be relevant to aromatase-dependent signaling in that context as well.

Transcriptomic studies of the fetal brain cortex and cortical cell cultures

MiSeq next-generation sequencing was performed on transcriptome libraries generated from the brain cortex of E16.5 fetuses after maternal mid-gestation BPA or vehicle exposure. The action of 10HDA was analyzed by RNAseq of transcriptome libraries from total RNA extracted from primary mouse fetal cortical cultures treated with vehicle or 10HDA. First, pathway analysis of the RNAseq data was performed for Gene Ontology (GO) categories using the clusterProfiler R package. No individual pathways in the BPA analysis survived correction for multiple comparisons using an agnostic (non-candidate) approach. Further candidate investigation using the binomial test showed a significant inverse effect of BPA and 10HDA on pathways previously linked to autism 57, with 10HDA treatment counteracting the effects of BPA on these pathways (Supplementary Fig. 9). Based on our Golgi staining findings of altered dendrite morphology, we further assessed the category “dendrite extension” as a candidate pathway. Genes in this pathway were downregulated by BPA (Supplementary Figs. 10A, 9A) and upregulated by 10HDA (Supplementary Figs. 9B, 10B). More broadly, Fisher’s exact test showed a significant BPA-associated downregulation (P = 0.01), and 10HDA-associated upregulation (P = 0.0001), of pathways with the terms “axon” and “dendrite”. Notably, the majority (82%; 9 of the top 11 available) of mid-gestational biological processes whose activity is overrepresented in induced pluripotent stem cells of autism cases vs. controls 57 were impacted by BPA, and in the opposite direction to 10HDA (P = 0.03; Supplementary Fig. 9).
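
As a concrete illustration of the GO-based pathway analysis described above, the following sketch uses the clusterProfiler R package; the exact parameters the authors used are not stated, and the gene vectors (deg_entrez, universe_entrez) are hypothetical placeholders.

```r
# GO biological-process enrichment of differentially expressed genes (illustrative only).
library(clusterProfiler)
library(org.Mm.eg.db)   # mouse annotation, since the libraries derive from mouse cortex

ego <- enrichGO(gene          = deg_entrez,       # DE genes as Entrez IDs (hypothetical)
                universe      = universe_entrez,  # all tested genes (hypothetical)
                OrgDb         = org.Mm.eg.db,
                ont           = "BP",
                pAdjustMethod = "BH",
                readable      = TRUE)
head(as.data.frame(ego))
# Candidate categories such as "dendrite extension" can then be examined directly,
# e.g. by testing the direction of change of their member genes with a binomial test.
```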

Next, we performed pathway enrichment analysis of the RNAseq data, also using a candidate pathway approach, with Ingenuity (Fig. 10). Strikingly, the effects of BPA and 10HDA on gene expression were diametrically opposed across many functional domains (Fig. 10). For example, the canonical pathways “Synaptogenesis Signaling pathway” and “CREB signaling” were downregulated by BPA but upregulated by 10HDA. Similarly, key brain functions, e.g., growth of neurites and neural development, were downregulated by BPA and reciprocally upregulated by 10HDA (Fig. 10). Taken together, these data indicate that prenatal BPA exposure adversely alters gene expression through a mechanism that may be ameliorated by postnatal 10HDA administration. The full list of differentially expressed genes can be found in Supplementary Dataset 1.

Fig. 10

The Canonical pathways and Disease and Function—Brain pathway databases were selected in Ingenuity. Several key signaling pathways and brain functions were downregulated (Z-score less than zero) by BPA and upregulated (Z-score greater than zero) by 10HDA. 10HDA downregulated four brain disorder-related pathways: hyperactive behavior, seizures, seizure disorder, and behavioral deficit. Colored boxes indicate significant (P < 0.05, Fisher’s exact test with Benjamini–Hochberg correction) changes in Z-score. Boxes shaded in gray indicate non-significant gene expression changes (P = 0.05 or greater). Source data are provided as a Source Data file.

Discussion

Here, we report that prenatal BPA exposure leads to ASD endophenotypes in males, and that this involves the actions of the aromatase gene, as well as its functions in brain cells. Our multimodal approach, incorporating both human observational studies and preclinical studies with two mouse models, offers significant insight into how prenatal programming by BPA disrupts aromatase signaling to cause anatomical, neurological, and behavioral changes reminiscent of ASD in males.

In the Barwon Infant Study (BIS) human birth cohort, the adverse effect of high prenatal BPA exposure on ASD symptoms (ASP score) at age 2, and on clinical ASD diagnosis at age 9, was particularly evident among males with a low aromatase enzyme activity genetic score. Further, studying cord blood gene methylation as an outcome, we demonstrated that BPA exposure was specifically associated with methylation of the offspring CYP19A1 brain promoter in the BIS cohort, a finding replicated in the CCCEH-MN cohort. Previously, in a meta-analysis of two epigenome-wide association studies (EWAS), two CpGs in the region around the brain promoters PI.f and PII were significantly associated (P < 0.05) with ASD in both EWAS 58. Past work 16, together with our findings that BPA reduced aromatase expression in a neuronal cell culture and reduced aromatase-EGFP expression in the mouse brain, is consistent with brain-specific suppression of aromatase expression. We also demonstrated that BPA exposure led to a reduction in steady-state levels of aromatase in a neuronal cell line. Further, we replicated past work showing that higher prenatal BPA levels are associated with BDNF hypermethylation 33, previously demonstrated to be associated with lower BDNF expression in males 33.

We find that prenatal BPA exposure at mid-gestation in mice induces ASD-like behaviors in male but not female offspring, concomitant with cellular, anatomical, functional, and behavioral changes (Supplementary Figs. 11 and 12). These features were also observed in male ArKO mice, which is important because aromatase expression is disrupted in BPA-treated male mice. In the Y-maze test, neither the ArKO nor the BPA-exposed offspring differed from their respective controls, indicating that there are no major memory, sensory, or motor issues in these animals. Given the distinct parallels between the effects of BPA, which suppresses aromatase, and ArKO mice, which lack aromatase, we surmise from our studies that BPA disrupts aromatase function to influence the male mouse brain, manifesting as: (i) reduced excitatory postsynaptic potentials in the amygdala; (ii) reduced neuron numbers, dendritic lengths, and spine densities for neurons within the MeA; (iii) altered cortical activity as recorded by ECoG, concomitant with decreased dendritic length and spine density in layer IV/V somatosensory cortical neurons; and (iv) enhanced repetitive behaviors and reduced social approach to a stranger. These results in mice are consistent with studies of human participants reporting abnormal neuronal structure in the corresponding regions of the brains of individuals with ASD 51. Furthermore, interrogating our gene expression dataset, we found that the majority of the top biological processes over-represented in cells derived from ASD cases compared to non-cases in a human pluripotent stem cell analysis focused on mid-gestational brain development 57 were impacted in opposite directions by BPA and 10HDA in our gene expression studies on the male mouse brain (Supplementary Fig. 9). Of note, the sexually dimorphic effect we report is consistent with the work of others demonstrating that prenatal BPA exposure of rodents led to dysregulation of ASD-related genes, neuronal abnormalities, and learning and memory problems only in males 59.

In our investigations of BPA, we recognized that 10HDA may be a suitable compound as a ligand in the context of brain ER signaling 27 because of its positive effect on gene expression through stimulation of estrogen-responsive DNA elements 25 and its role in neurogenesis 27, characteristics that together may compensate for a relative lack of aromatase-generated neural estrogens. Administration of 10HDA alongside BPA protected neuronal cells in culture from the adverse sequelae observed for BPA alone at the same dose. Three weeks of daily postnatal 10HDA treatment significantly enhanced the sociability of male BPA-exposed mice, and 10HDA also improved dendrite morphology in primary cell culture: the adverse decrease in dendrite lengths and spine densities following BPA exposure was corrected by 10HDA administration (Fig. 8). Furthermore, postnatal 10HDA treatment restored amygdala electrical activity in the ArKO mice, indicating that 10HDA likely acts downstream of, rather than directly upon, the aromatase enzyme, given that ArKO mice lack functional aromatase. Transcriptomic analyses revealed that 10HDA upregulated, whereas BPA downregulated, gene expression in fetal programming pathways such as synaptogenesis and growth of neurites. Some of these pathways could be activated by factors downstream of aromatase, such as 17β-estradiol (Supplementary Fig. 13). In this study, the ArKO model was useful because it provided an estrogen-deficient comparison 41. We were able to demonstrate that early postnatal E2 administration restored both MeA neural activation and social preference behavior in the ArKO males.

The molecular docking simulations indicate that ERα and ERβ both contain docking sites for 10HDA and BPA; however, 10HDA is strongly estrogenic 25, 55, while BPA is greater than 1000-fold less potent than natural estrogen 54. Such differences in binding are likely relevant to the divergent transcriptomic effects observed in the cells we analyzed by RNAseq.

Strengths of this study include the multimodal approach used to test the hypothesis of an interplay between BPA, male sex, and aromatase suppression. In our human epidemiological studies, extensive information was available to allow confounding to be accounted for using matched analyses for the BPA-ASD cohort finding, and findings persisted after adjustment for further individual confounders. Using a modern causal inference technique, molecular mediation 60, we demonstrate in both birth cohorts that aromatase gene promoter I.f methylation underlies the known effect of higher prenatal BPA on BDNF hypermethylation. Other key features that support an underlying causal relationship include the consistency of the findings across studies in this program (Supplementary Fig. 11), and the consistency with which our experimental laboratory work maps to prior studies of people with ASD (summarized in Supplementary Fig. 12) in relation to neuronal and structural abnormality in the amygdala 43, abnormality in amygdala connectivity 44, and resting-state cortical EEG 53. Our findings are also consistent with past work indicating reduced prefrontal aromatase levels in individuals with ASD at postmortem 14, 15. The finding that BPA-associated gene methylation patterns in the BIS cohort were not sex-specific, but that BPA-associated ASD symptoms and clinical diagnosis were more evident in males with a low genetic aromatase score, would be consistent with the male vulnerability to BPA reflecting not differential epigenetic programming but a greater vulnerability to reduced aromatase function in the developing male brain. This is reinforced by the ArKO model, which resulted in an ASD-like phenotype in males, not females. We have provided experimental evidence not only of the adverse neurodevelopmental effects of BPA, but also of the alleviation of the behavioral, neurophysiological, and neuroanatomical defects following postnatal treatment with 10HDA. A human randomized controlled prevention trial that achieved bisphenol A elimination during pregnancy, with a resultant reduction in ASD among male offspring, would be a useful next step to provide further causal evidence of BPA risks, but the feasibility and ethical challenges of such an undertaking would be considerable. We demonstrate that postnatal administration of 10HDA is a potential therapeutic approach for counteracting the detrimental impacts of prenatal BPA exposure on distinct gene expression signatures. Furthermore, 10HDA may ameliorate deficits in ArKO mice, which further suggests its utility as a replacement therapy for aromatase deficiency.

Two limitations of our human study were that BPA exposure was measured in only one maternal urine sample at 36 weeks, and that the assay may have low sensitivity 61. We partially redressed the latter by focusing on categorical BPA values, as recommended 61, and undertook a matched ASD analysis where determinants of BPA variation, such as the urine collection time of day, were matched to reduce misclassification. Also, functional gene expression studies were unavailable for human samples in our study, but whilst the misclassification introduced by a reliance on a SNP-based score would likely lead to an underestimation, an effect was still found among males with a low genetic aromatase score. It would be useful in further studies to consider altered aromatase function with a combined epigenetic-genetic score to reflect environment-by-epigenetic and genetic determinants of low aromatase function. Direct brain EWAS measures were not available, but for the key brain promoter PI.f region of CYP19A1, the brain-blood correlation is very high: Spearman's rho = 0.94, 95% CI [0.80, 0.98] 32. Although ASD symptoms (ASP score) at 2 years were based on parent report, we have previously reported that a higher ASP score was predictive of later ASD diagnosis by age 4 30. ASD diagnosis at age 9 was verified to meet DSM-5 criteria by pediatrician audit of medical records, thereby reducing diagnostic misclassification.

In our preclinical studies, we performed the 10HDA studies and some of the BPA mechanistic studies only on male animals because our extensive laboratory and human studies, comprising more than 25 analyses, demonstrated that BPA exposure had significantly more adverse effects in males than in females. In addition, we performed the RNAseq on the cortex of fetal mice exposed to BPA in vivo, whereas the 10HDA RNAseq was performed in vitro on primary cortical cell culture. This difference would likely increase the variability between the BPA RNAseq and the 10HDA RNAseq, yet many of the same pathways were impacted, albeit in opposing directions. Furthermore, the changes we observed in our RNAseq data could be due to changes in cell type or cell state. This could be clarified by future single-cell RNAseq experiments now that this specific issue has been identified.

The BPA exposure of mouse dams under our experimental conditions (50 µg/kg bodyweight) matches the current Oral Reference Dose set by the United States Environmental Protection Agency, the current safe level set by the U.S. Food and Drug Administration (FDA) 37, as well as the Tolerable Daily Intake set by the European Food Safety Authority 38 at the time that the mothers in our human cohort were pregnant 28. The EFSA set a new temporary TDI of 4 µg/kg bodyweight in 2015 62 and, in December 2021, recommended further reducing this by five orders of magnitude to 0.04 ng/kg 63, although this was subsequently revised to 0.2 ng/kg in EFSA's scientific opinion published in 2023 64. Therefore, the timing of this new evidence is particularly pertinent, and it provides direct human data to support the reduced TDI.

Consistent with typical human exposure in other settings 20, BPA exposure in our birth cohort was substantially lower than the above, and yet we see adverse effects. Assuming fractional excretion of 1 65 and an average daily urine output of 1.6 L 65, the median urinary bisphenol concentration of 0.68 µg/L, for which we see increased odds of ASD diagnosis, equates to a total daily intake of just 13 ng/kg, given a mean maternal bodyweight of 80.1 kg at the time of urine collection. Notably, while we find an adverse association at 13 ng/kg, we do not have sufficient participants with lower exposure to evaluate a safe lower limit of exposure below this. Our findings in cell culture, at a concentration of 5 µg/L, parallel these human findings in terms of dose response. Although there are limitations in translating concentrations across body compartments without a stronger understanding of the pharmacokinetics of BPA, 5 µg/L corresponds to the 90th and 95th percentiles of BPA in urine in our human cohort, and allowing for the standard factor of 10 for variability in human sensitivity used when setting TDIs 38 indicates relevance down to at least 0.5 µg/L, below the median urine concentration. Our findings in laboratory animal studies, with exposure of 50 µg/kg bodyweight, are a little higher, as they were designed to correspond to the then-current recommendations 36, 37, 38, but the implications nevertheless have relevance within the range of exposure in our cohort. Allowing for standard factors of 10 for interspecies variability 38 and for variability in human sensitivity 38, our animal study findings support a TDI at 500 ng/kg or below, which corresponds to the upper 0.5% of our human cohort. The findings of the human study, also allowing for a factor of 10 for variability in human sensitivity 38, therefore support a TDI at or below 1.3 ng/kg.
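
The back-calculation from urinary concentration to daily intake described above can be reproduced with simple arithmetic, using the stated assumptions (fractional excretion of 1, 1.6 L urine per day, mean bodyweight 80.1 kg):

```r
urinary_bpa_ug_L <- 0.68    # median urinary BPA (µg/L)
urine_L_per_day  <- 1.6     # assumed daily urine output (L/day)
bodyweight_kg    <- 80.1    # mean maternal bodyweight (kg)

intake_ng_per_kg_day <- urinary_bpa_ug_L * urine_L_per_day / bodyweight_kg * 1000
round(intake_ng_per_kg_day, 1)        # ~13.6 ng/kg/day, i.e. the ~13 ng/kg quoted above

# applying the standard factor of 10 for variability in human sensitivity
round(intake_ng_per_kg_day / 10, 2)   # ~1.4 ng/kg/day, in line with the ~1.3 ng/kg figure
```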

Despite bans on its use in all infant products by the European Union in 2011 and the U.S. FDA in 2012, BPA remains widespread in the environment 66. The main source of human exposure to BPA is dietary contamination 68. Bisphenols are used in the production of common food contact materials, and migrate from those materials during use 69, including polycarbonate food and beverage containers and the epoxy linings of metal food cans, jar lids, and residential drinking water storage tanks and supply systems 64. Additional sources of exposure include BPA-based dental composites and sealant epoxies, as well as thermal receipts 64. BPA levels in pregnant women have previously been reported to be higher among younger mothers, smokers, and those with lower education and lower income 70. A substantial proportion of ASD cases might be prevented at the population level if these findings were causal and prenatal maternal BPA exposure were reduced. Here, exposures in the top quartile of BPA (>2.18 µg/L) correspond to a population attributable fraction (PAF) for males with low aromatase of 12.6% (95% CI 5.8%, 19.0%), although this estimate is imprecise as it is based on low case numbers. The only other available study with data on BPA exposure (>50 µg/L) and ASD provides an estimate in all children of 10.4% 67. These studies have misclassification issues (e.g., a single urine measure for BPA and, in the Stein et al. study, an ASD diagnosis derived from health care sources 67), but these misclassifications are likely non-differential and thus would bias findings towards the null. Additionally, we note that the above findings for the RfD/TDI and PAF are based on BPA alone. Factoring in that most exposures occur as part of a chemical mixture adds additional concern 71; for example, prenatally valproic acid-exposed mice (an established ASD mouse model) also have lower brain aromatase expression 72.
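
For readers unfamiliar with the population attributable fraction, the sketch below shows Levin's formula in R. The relative risk used here is a hypothetical placeholder for illustration only; it is not the estimate underlying the 12.6% PAF reported above.

```r
# Levin's formula: PAF = p(RR - 1) / (1 + p(RR - 1)),
# where p is the prevalence of exposure and RR is the relative risk.
paf_levin <- function(p_exposed, rr) {
  p_exposed * (rr - 1) / (1 + p_exposed * (rr - 1))
}

# Hypothetical example: top-quartile exposure (p = 0.25) with an assumed RR of 2
paf_levin(p_exposed = 0.25, rr = 2)   # 0.20, i.e. a PAF of 20%
```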

In summary, this multimodal program of work has shown an adverse effect of higher maternal prenatal BPA exposure on the risk of ASD in male offspring, acting through a molecular pathway of reduced aromatase function, which plays a key role in sex-specific early brain development. Overall, these findings add to the growing evidence base of adverse neurodevelopmental effects from exposure to bisphenols and other manufactured chemicals during pregnancy. The case is compelling and supports the broader evidence on the need to further reduce BPA exposure, especially in pregnancy. We also envision that our findings will contribute to new interventions for the prevention and/or amelioration of ASD that target this specific pathophysiological pathway, and we have identified one possible neuroprotective agent, 10HDA, that has strong laboratory support. This agent now warrants further study, including human safety and efficacy evaluation.

Methods

The human Barwon Infant Study cohort study was approved by the Barwon Health Human Research Ethics Committee, and families provided written informed consent. Parents or guardians provided written informed consent at prenatal recruitment and again when the child was 2 years of age. The human Columbia Center for Children's Environmental Health Mothers and Newborns cohort study was approved by the Institutional Review Boards of Columbia University and the Centers for Disease Control and Prevention, and all participants in the study provided informed consent. All procedures involving mice were approved by the Florey Institute of Neuroscience and Mental Health animal ethics committee and conformed to the Australian National Health and Medical Research Council code of practice for the care and use of animals for scientific purposes. All experiments were designed to minimize the number of animals used, as well as pain and discomfort. This work adheres to the ARRIVE Essential 10 guidelines.

The Barwon Infant Study birth cohort

Participants.

From June 2010 to June 2013, a birth cohort of 1074 mother–infant pairs (including 10 sets of twins) was recruited using an unselected antenatal sampling frame in the Barwon region of Victoria, Australia 28. Eligibility criteria, population characteristics, and measurement details have been provided previously 28; 847 children had prenatal bisphenol A measures available (Supplementary Table 1).

Bisphenol A measurement

We used a direct-injection liquid chromatography tandem mass spectrometry (LC-MS/MS) method, as previously described in detail 73. In summary, a 50 µL aliquot of urine was diluted in Milli-Q water and combined with isotopically labeled standards and β-glucuronidase (from E. coli K-12). Samples were incubated for 90 min at 37 °C to allow for enzymatic hydrolysis of bisphenol conjugates before the reaction was quenched with 0.5% formic acid. Samples were centrifuged before analysis, which was performed using a Sciex 6500+ QTRAP in negative electrospray ionization mode. The BPA distribution and quality control attributes for the application of this method to the Barwon Infant Study (BIS) cohort are shown in Supplementary Table 2.

Child neurodevelopment

When children were between the ages of seven and ten years, a health screen phone call was conducted to gather information on autism spectrum disorder (ASD) diagnoses and symptomatology. Of the 868 individuals who responded to the health screen, 80 had an ASD diagnosis reported by their parents/guardians or were identified as potentially having ASD. The parent-reported diagnoses were confirmed by pediatrician audit of the medical documentation to verify an ASD diagnosis as per DSM-5 guidelines. Participants who had a parent-reported diagnosis and then a verified pediatrician diagnosis by 30 June 2023, and whose diagnoses occurred before the date of their 9-year health screen, were included as ASD cases in this study's analyses (n = 43). Participants were excluded if (i) their parent/guardian responded with 'Yes' or 'Under Investigation' to the question of an ASD diagnosis on the year-9 health screen but their diagnosis was not verified by 30 June 2023 (n = 26), or (ii) they had a verified diagnosis of ASD by 30 June 2023 but their date of diagnosis did not precede the date of their year-9 health screen (n = 15). The DSM-5-oriented autism spectrum problems (ASP) scale of the Child Behavior Checklist for Ages 1.5–5 (CBCL), administered at 2–3 years, was also used as an indicator of autism spectrum disorder.

Whole genome SNP arrays

Blood from the umbilical cord was gathered at birth and then transferred into serum coagulation tubes (BD Vacutainer). Following this, the serum was separated using centrifugation as described elsewhere 74 . Genomic DNA was extracted from whole cord blood using the QIAamp DNA QIAcube HT Kit (QIAGEN, Hilden, Germany), following manufacturer’s instructions. Genotypes were measured by Erasmus MC University Medical Center using the Infinium Global Screening Array-24 v1.0 BeadChip (Illumina, San Diego, CA, USA). The Sanger Imputation Service (Wellcome Sanger Institute, Hinxton, UK) was used for imputing SNPs not captured in the initial genotyping using the EAGLE2 + PBWT phasing and imputation pipeline with the Haplotype Reference Consortium reference panel 75 . Detailed methods are provided elsewhere 76 .

Genome-wide DNA methylation arrays and analysis methods can be found in the Supplementary Methods.

Center for Children’s Environmental Health (CCCEH) epigenetic investigations

The study participants were mothers and their children enrolled in the prospective Columbia Center for Children's Environmental Health Mothers and Newborns (CCCEH-MN) cohort in New York City (NYC). The women were enrolled while pregnant, between 1998 and 2003. They were aged between 18 and 35 years and had no prior history of diabetes, hypertension, or HIV. Furthermore, they had not used tobacco or illicit drugs and had initiated prenatal care by the 20th week of their pregnancy. Every participant gave informed consent, and the research received approval from the Institutional Review Boards at Columbia University as well as the Centers for Disease Control and Prevention (CDC) 33.

Epigenetic methods have been previously described 77 . Briefly, DNA methylation was measured in 432 cord blood samples from the CCCEH-MN cohort using the 450 K array (485,577 CpG sites) and in 264 MN cord blood samples using the EPIC array (866,895 CpG sites) (Illumina, Inc., San Diego, CA, USA).

BPA measures in the CCCEH were based on spot urine samples collected from the mother during pregnancy (range, 24–40 weeks of gestation; mean, 34.0 weeks) 33 , 78 .

Other statistical analysis

Maternal urinary BPA concentrations were corrected for specific gravity to control for differences in urine dilution. Given that a high proportion of the sample (46%) had BPA concentrations that were not detected or were below the limit of detection (LOD), a dichotomous BPA exposure variable was formed using the 75th percentile as the cut-point. Dichotomizing the measurements in this way is also likely to give similar results regardless of whether indirect or direct analytical methods were used 79. This is desirable since indirect methods may be flawed and underestimate human exposure to BPA 61.

To evaluate whether autism spectrum problems at 2 years could be used as a proxy for later ASD diagnosis, receiver operating characteristic (ROC) curve analyses were used. The CBCL ASP score at age 2 years predicted diagnosed autism strongly at age 4 and moderately at age 9, with areas under the curve of 0.92 (95% CI 0.82, 1.00) and 0.70 (95% CI 0.60, 0.80), respectively.
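
A minimal sketch of this ROC analysis using the pROC R package is shown below; the data frame and column names (bis, asd_dx_age9, asp_tscore_age2) are hypothetical.

```r
library(pROC)

roc_age9 <- roc(response  = bis$asd_dx_age9,     # 0/1 verified ASD diagnosis at age 9
                predictor = bis$asp_tscore_age2, # CBCL ASP T-score at age 2
                ci = TRUE)
auc(roc_age9)       # area under the curve
ci.auc(roc_age9)    # 95% confidence interval for the AUC
```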

According to the normative data of the CBCL, T-scores greater than 50 are above the median. Due to a skewed distribution, ASP measurements were dichotomized using this cut-point, which has positive and negative likelihood ratios of 2.68 and 0.00, respectively, for the prediction of verified ASD diagnosis at 4 years, and of 1.99 and 0.49 for the prediction of verified ASD diagnosis at 9 years.
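
The likelihood ratios quoted here follow the standard definitions LR+ = sensitivity / (1 − specificity) and LR− = (1 − sensitivity) / specificity. A small helper, with placeholder sensitivity and specificity values rather than the study's own, is sketched below.

```r
# LR+ = sens / (1 - spec); LR- = (1 - sens) / spec
likelihood_ratios <- function(sens, spec) {
  c(LR_pos = sens / (1 - spec),
    LR_neg = (1 - sens) / spec)
}

likelihood_ratios(sens = 0.80, spec = 0.70)   # placeholder values: LR+ ~2.67, LR- ~0.29
```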

A CYP19A1 genetic score for aromatase enzyme activity was developed based on five genotypes of single nucleotide polymorphisms (CC of rs12148604, GG of rs4441215, CC of rs11632903, CC of rs752760, AA of rs2445768) that are associated with sex hormone levels 31 . Participants were classified as ‘low activity’ if they were in the top quartile, that is, they had three or more genotypes associated with lower levels of estrogen and as ‘high activity’ otherwise. Conditional logistic regression model analyses investigating the association between prenatal BPA levels and (i) early childhood ASP scores and (ii) verified ASD diagnosis at 9 years were conducted in the full sample, repeated after stratification by child’s sex (assigned at birth based on visible external anatomy), and repeated again after further stratifying by the CYP19A1 genetic score. Matching variables included child’s sex (in the full sample analysis only), ancestry (all four grandparents are Caucasian vs not) and time of day of maternal urine collection (after 2 pm vs before). Within these matched groups, we additionally matched age-9 ASD cases and non-cases based on the date of the health screen and child’s age at the health screen using the following procedure. Each case was matched to a single non-case based on nearest date of and age at health screen. Once all cases had one matched non-case, a second matched non-case was allocated to each case, and so on until all cases had 8 matched non-cases (8 was the most possible in the boys with high aromatase activity sub-sample and so this number was used across all sub-samples). The order by which cases were matched was randomly determined at the start of each cycle.
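
A minimal sketch of the conditional logistic regression on the matched sets described above, using the survival R package, is given below; the data frame and column names (bis_matched, asd_dx_age9, bpa_top_quartile, matchset) are hypothetical placeholders.

```r
library(survival)

# asd_dx_age9: 0/1 case status; bpa_top_quartile: 0/1 exposure; matchset: matched-set ID
clogit_fit <- clogit(asd_dx_age9 ~ bpa_top_quartile + strata(matchset),
                     data = bis_matched)
summary(clogit_fit)   # exp(coef) gives the odds ratio for the high-BPA group

# Stratified analyses (e.g. males with a low aromatase genetic score) refit the model
# on the relevant subset of matched sets.
```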

The guidelines for credible subgroup investigations were followed 80 . Only two categorical subgroup analyses were conducted, and these were informed a priori by previous literature and by initial mouse study findings. The adverse BPA effects in males with low aromatase enzyme activity (as inferred from the CYP19A1 genetic score) were expected to be of higher magnitude, based on the prior probabilities from the laboratory work. A systematic approach was used to evaluate non-causal explanations and build evidence for causal inference, considering pertinent issues such as laboratory artefacts that are common in biomarker and molecular studies 81 .

A second CYP19A1 genetic score was developed for use in sensitivity analyses. The Genotype-Tissue Expression (GTEx) portal was used to identify the top five expression quantitative trait loci (eQTLs) for aromatase in any tissue type that showed a consistent effect direction in brain tissue. A functional genetic score was then computed for each BIS participant by summing the number of aromatase-promoting alleles they carry across the five eQTLs (AA of rs7169770, CC of rs1065778, AA of rs28757202, CC of rs12917091, AA of rs3784307), weighted by their normalized effect size (NES) in amygdala tissue. The score was then reversed so that higher values indicate lower aromatase activity. The score thus captures genetic contribution to reduced cross-tissue aromatase activity with a weighting towards the amygdala, a focus in our animal studies. The variable was dichotomized using the 75th percentile as the cut-point and the above stratified analyses were repeated with this new weighted score replacing the original, unweighted score.
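
The construction of the weighted eQTL score can be sketched as follows; the allele-count data frame (geno) and the NES weights shown are hypothetical placeholders, not the actual GTEx values used.

```r
eqtls <- c("rs7169770", "rs1065778", "rs28757202", "rs12917091", "rs3784307")
nes   <- c(rs7169770 = 0.30, rs1065778 = 0.25, rs28757202 = 0.20,
           rs12917091 = 0.18, rs3784307 = 0.15)   # placeholder NES weights (amygdala)

# geno: data frame with 0/1/2 counts of the aromatase-promoting allele per eQTL
raw_score <- as.numeric(as.matrix(geno[, eqtls]) %*% nes[eqtls])

# reverse so that higher values indicate lower predicted aromatase activity,
# then dichotomize at the 75th percentile as for the unweighted score
geno$low_aromatase_score <- max(raw_score) - raw_score
geno$low_aromatase_grp   <- geno$low_aromatase_score >=
  quantile(geno$low_aromatase_score, 0.75)
```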

For the human epigenetic investigations, we used multiple linear regression and mediation 60 approaches. As in past work 33 , BPA was classified as greater than 4 μg/L vs less than 1 μg/L in the CCCEH-MN cohort. A comparable classification was used for the BIS cohort, with greater than 4 μg/L vs the rest. In both cohorts the regression and mediation analyses were also adjusted for sex, gestational age, self-reported ethnicity, and cord blood cell proportions. In the BIS cohort, ethnicity was defined as all four grandparents are Caucasian vs not (see Table  S3 ). For the CCCEH-MN cohort, ethnicity was defined as Dominican vs African American 33 . We used statistical software packages R v3.6.3 82 and Stata 15.1 83 .
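
A minimal sketch of the regression and mediation structure described above, assuming the R ‘mediation’ package and hypothetical variable names (bpa_high, methylation, outcome, and simplified covariates); the data are simulated and this is not the study code.

```r
library(mediation)

set.seed(1)
n   <- 200
dat <- data.frame(bpa_high  = rbinom(n, 1, 0.3),       # >4 ug/L vs comparison group
                  sex       = rbinom(n, 1, 0.5),
                  gest_age  = rnorm(n, 39, 1.5),
                  ethnicity = rbinom(n, 1, 0.8),
                  cell_prop = runif(n))                 # cord blood cell proportion
dat$methylation <- 0.50 + 0.05 * dat$bpa_high + rnorm(n, 0, 0.05)
dat$outcome     <- 50 + 20 * dat$methylation + 2 * dat$bpa_high + rnorm(n, 0, 5)

# Mediator model and outcome model, both adjusted for the covariates
med_fit <- lm(methylation ~ bpa_high + sex + gest_age + ethnicity + cell_prop,
              data = dat)
out_fit <- lm(outcome ~ methylation + bpa_high + sex + gest_age + ethnicity + cell_prop,
              data = dat)

med <- mediate(med_fit, out_fit, treat = "bpa_high", mediator = "methylation",
               boot = TRUE, sims = 500)
summary(med)   # reports ACME (indirect), ADE (direct) and total effects
```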

Laboratory studies

SH-SY5Y cell culture study: BPA treatment on aromatase expression in cell culture.

Human neuroblastoma SH-SY5Y cells were chosen because they are known to express aromatase and have been used in ASD research 84. SH-SY5Y cells (CRL-2266, American Type Culture Collection, Virginia, USA) were maintained in Dulbecco’s Modified Eagle Medium (DMEM) (10313-021, Gibco-Life Technologies, New York (NY), USA) supplemented with 10% heat-inactivated Fetal Bovine Serum (FBS) (12003C-500 mL, SAFC Biosciences, Kansas, USA), 1% penicillin-streptomycin (pen/strep) (15140-122, Gibco-Life Technologies, NY, USA) and 1% L-glutamine (25030-081, Gibco-Life Technologies, NY, USA) at 37 °C in a humidified atmosphere of 95% air and 5% CO2. Cells were grown in 175 cm2 cell culture flasks (T-175) (353112, BD Falcon, Pennsylvania, USA) and passaged when flasks reached roughly 80-90% confluence. To passage, media was aspirated from the flasks, and the flasks were washed once with 10 mL of DPBS (14190-136, Gibco-Life Technologies, NY, USA) to remove the FBS (which inhibits the action of trypsin). Cells were then incubated with trypsin (2 mL/T-175 flask) at 37 °C for 5 min to detach them from the flask wall. To stop the action of trypsin, media (8 mL/T-175 flask) was added and the contents were pipetted up and down to disperse cell clumps. The cell suspension was transferred to a 15 mL centrifuge tube (430791, Corning CentriStar, Massachusetts, USA) and centrifuged (CT15RT, Techcomp, Shanghai, China) for 5 min at 1000 RPM at room temperature (RT). The media was aspirated from the tubes and the cell pellet was resuspended in 1 mL of media. Cell viability counts were performed using a hemocytometer (Hausser Scientific, Pennsylvania, USA) to determine the number of live versus dead cells in suspension. Two μL of cell suspension was diluted with media (98 μL) and then trypan blue (100 μL) (T8154, Sigma-Aldrich Co., St. Louis, MO, USA), which labels dead cells, in a sterile microcentrifuge tube (MCT-175-C-S, Axygen, California, USA). Ten μL of this solution was loaded into the hemocytometer and imaged using a light microscope (DMIL LED, Leica, Germany). Dead cells appeared blue under the microscope because they take up the dye, whereas live cells remained clear (i.e., unstained). Cells were counted in the outer four squares of each of the two chambers (eight squares in total, of known dimensions). The average of the eight counts was multiplied by the dilution factor and by 10^4, yielding the concentration of cells/mL of suspension. Average cell counts were plotted against treatment groups using GraphPad Prism 9.0 (GraphPad Software, Inc., San Diego, CA).
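
The cell concentration calculation described above reduces to the following arithmetic, shown here with an invented set of eight square counts:

```r
# Invented counts for the eight outer hemocytometer squares (not study data)
square_counts   <- c(42, 38, 45, 40, 39, 44, 41, 37)
dilution_factor <- 100    # 2 uL of cell suspension in 200 uL total (media + trypan blue)

# Each large square holds 0.1 uL, hence the factor of 1e4 to convert to cells/mL
cells_per_ml <- mean(square_counts) * dilution_factor * 1e4
cells_per_ml
```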

Bisphenol A (BPA) (239658-50 G, Sigma-Aldrich Co., St. Louis, MO, USA) was used for cell treatment. Prior to treatment, a BPA stock solution was prepared as follows: BPA was dissolved in pure ethanol (EA043-2.5 L, Chem Supply, South Australia, Australia) to a final stock concentration of 0.0435 g/mL. Cells in T-175 flasks were randomly assigned to receive treatment with BPA at a dosage of 100 μg/L, 50 μg/L, 25 μg/L or 0 μg/L (vehicle). A no-treatment (no vehicle added) flask was also included.

Cell treatment, protein assay, SDS-Page, and western blotting methods can be found in the Supplementary Methods.

Animal studies

Two colonies of mice, maintained at the Florey Institute, were used in this study: the aromatase knockout (ArKO) mouse model and the aromatase-enhanced green fluorescent protein (Cyp19-EGFP) transgenic mouse model. Animals were monitored daily except on weekends. If animals showed general clinical signs, an animal technician or a veterinarian was consulted for advice and euthanasia was performed as required.

Mice were maintained under specific pathogen-free (SPF) conditions on a 12 h day/night cycle, with ad libitum water and soybean-free food (catalog number SF06-053, Glen Forrest Stockfeeders, Glen Forrest, Western Australia, Australia). Facial tissues were provided for nesting material, and no other environmental enrichment was provided. The room temperature ranged from 18 °C-23 °C and the humidity ranged from 45%-55%.

The sex of mice was determined by SRY genotyping if fetal; otherwise, sex was determined by examining the anogenital region around PND9 and again at weaning. Sex was confirmed by inspecting the gonads during dissection. Cyp19-EGFP mice were toe- and tail-clipped for identification and genotyping at PND9; ArKO mice were ear-notched and tail-clipped at two weeks of age. The oligonucleotide sequences (custom oligos, Geneworks, Australia) for ArKO, GFP and SRY genotyping can be found in Supplementary Data File 2.

Aromatase knockout (ArKO) mouse model

The ArKO mouse is a transgenic model with a disruption of the Cyp19a1 gene, in which exon IX of the Cyp19a1 gene was replaced with a neomycin-resistance cassette 41. Homozygous knockout (KO) and wild-type (WT) offspring were bred by mating heterozygous (het) ArKO parents and then genotyped by PCR. ArKO mice were backcrossed onto a C57BL/6J background strain for >10 generations (obtained from Animal Resources Centre, Western Australia) and the colony was maintained at the Florey Institute.

Aromatase-enhanced green fluorescent protein (Cyp19-EGFP) transgenic mouse model

The Cyp19-EGFP mouse model (backcrossed onto the FVBN background strain for >10 generations, obtained from Animal Resources Centre, Western Australia) is a transgenic model carrying a bacterial artificial chromosome containing the full-length Cyp19a1 gene with an Enhanced Green Fluorescent Protein (EGFP) gene inserted upstream of the ATG start codon 11. Thus, EGFP expression is an endogenous marker for Cyp19a1 expression. This allows the visualization and subsequent localization of EGFP as a marker for aromatase without the use of potentially nonspecific aromatase antibodies 11. We have previously characterized this transgenic model and its brain expression of EGFP 11. Based on our characterization studies, this transgenic model does not have phenotypes that differ significantly from wild-type mice.

Early postnatal 17β-estradiol treatment

Mice were allocated into three groups: (1) WT mice receiving a sham implantation; (2) ArKO mice receiving a sham implantation; and (3) ArKO mice undergoing implantation with a 17β-estradiol (E2) pellet (sourced from Innovative Research America). The estradiol pellet was designed to release 0.2 mg of 17β-estradiol steadily over a period of 6 weeks. A corresponding sham pellet, identical in size but devoid of E2, was implanted in the control groups.

The implantation procedure was carried out on postnatal day 5. For anesthesia, mice were exposed to 2% isoflurane (IsoFlo, Abbott Laboratories, VIC, Australia) within an induction chamber. The efficacy of anesthesia was confirmed by the lack of response to foot-pinch stimuli. During the surgical procedure, mice were maintained on a heated pad to regulate body temperature. A small, 5 mm incision was made in the dorsal region for the subcutaneous insertion of the pellet, preceded by an injection of bupivacaine in the same area. Following the implantation, the incision was carefully sutured. Post-surgery, mice were placed in a thermal cage (Therma-cage, Manchester, UK) for recovery and monitoring until they regained consciousness and could be returned to their respective litters. Any mice exhibiting complications such as opened stitches were excluded from the study.

BPA injection administration treatment

Plugged FVBN dams were randomly assigned, blocking by weight gain at E9.5 and litter/cage where applicable, to receive daily scruff subcutaneous injections (24 G x 1”, Terumo, Somerset, New Jersey, USA) of BPA (239658-50 G, Sigma-Aldrich Co., St Louis, MO, USA) in ethanol and peanut oil (Coles, Victoria, Australia), either between E0.5-E9.5, E10.5-E14.5 or E15.5-birth at a dosage of 50 μg/kg (deemed as the safe consumption level by the Food and Drug Administration, FDA) 37 or 0 μg/kg (vehicle) of maternal body weight. The injection volume was 1.68 μL/g bodyweight. Mice were weighed directly before each injection. BPA and vehicle exposed litters did not differ in litter size (Supplementary Fig.  14 ).
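
As a worked example of the dosing arithmetic (illustrative only, not part of the published protocol), the sketch below computes the injection volume for an example dam and the BPA concentration the working solution would need so that 1.68 µL/g delivers 50 µg/kg:

```r
dose_ug_per_kg  <- 50     # target BPA dose per kg maternal body weight
volume_ul_per_g <- 1.68   # injection volume per g body weight
dam_weight_g    <- 30     # example dam weight (illustrative)

injection_volume_ul <- volume_ul_per_g * dam_weight_g         # ~50.4 uL
bpa_dose_ug         <- dose_ug_per_kg * dam_weight_g / 1000   # 1.5 ug

# Required BPA concentration of the working solution
conc_ug_per_ul <- bpa_dose_ug / injection_volume_ul           # ~0.03 ug/uL
c(injection_volume_uL = injection_volume_ul,
  bpa_dose_ug         = bpa_dose_ug,
  conc_mg_per_L       = conc_ug_per_ul * 1000)                # ~29.8 mg/L
```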

10HDA injection administration treatment

Cyp19-EGFP or ArKO mice were randomly assigned, blocking on sex and litter, to receive daily intraperitoneal injections (31 G x 1”, Terumo, Somerset, New Jersey, USA) of 500 μg/kg 10HDA (Matreya, USA) in saline, or vehicle (saline), for 21 consecutive days. The injection volume was 2.1 μL/g bodyweight. Mice were weighed directly before each injection.

BPA oral administration treatment

Plugged FVBN dams were exposed to jelly at E9.5. The jelly contained 7.5% Cottee’s Raspberry Cordial (Coles, Victoria, Australia) and 1% bacteriological agar (Oxoid, Australia) in milli-Q water. The pH was increased to between 6.5 and 7.5 with a pellet of NaOH to allow the jelly to set. Dams were then randomly assigned, blocking by weight gain at E9.5 and litter/cage where applicable, to receive a daily dose of jelly, which contained either ethanol or BPA dissolved in ethanol, at a dosage of 50 μg/kg or 0 μg/kg (vehicle) of maternal body weight. Dams received doses between E10.5-E14.5, and only dams that were observed to have consumed all the jelly each day were included in the study.

Behavioral paradigms

Three-chamber social interaction test.

The three-chamber social interaction test is extensively used to investigate juvenile and adult social interaction deficits, including deficits in sociability 85 , 86 . BPA-exposed pups were habituated in the experimental room on P21, directly after weaning. Following a two-to-three-day habituation, testing was conducted from P24 to P27, as a maximum of only ten mice could be tested during the light phase per day. ArKO mice treated with the estrogen or sham pellet began habituation at PND28-29, with testing at PND31-33. Both male and female mice were tested. Testing was performed in a dedicated room for mouse behavior studies; no other animals were present in the room at the time of acclimatization and testing. The temperature of the room was maintained at approximately 21 °C.

The test apparatus, a clear three-chambered plexiglass box measuring 42 cm x 39 cm x 11 cm, had two partitions creating left and right chambers (blue zones) and a center chamber (green zone), between which mice could freely roam via two 4 cm x 5 cm openings in the partitions (Supplementary Fig.  15 ). The two side chambers each contained an empty wire cage. A 1 cm wide zone in front of each wire cage was defined as the interaction zone (yellow zones). The chamber was set on a black table for white mice, and on a white covering for black mice, to aid tracking.

Each test consisted of two consecutive 10-min trials: a habituation trial (T1) and a sociability trial (T2). T1 allowed the test mouse to habituate, and any bias toward either empty interaction zone was noted. For T2, a novel C57BL/6J stranger mouse, matched with the test mouse for age and sex, was introduced into the cage on the side opposite to that for which the test mouse had demonstrated an interaction zone bias. Thus, any evidence of sociability is strengthened, as the interaction zone bias would have to be overcome.

For each trial, the test mouse began in the center chamber, and its activity, both body center point and nose point, was tracked and quantified by TopScan Lite (Clever Sys Inc., Reston VA, USA). In this study, the key measure extracted was the average duration of the nose point in each interaction zone.

Social approach and sociability were analyzed. We defined social approach as the time the test mouse’s nose point was tracked in the stranger cage interaction zone. Sociability was defined as the test mouse spending a higher proportion of time with its nose point in the stranger cage interaction zone than in the empty cage interaction zone.

Details on the Y-maze and grooming methods can be found in the Supplementary Methods.

Golgi staining

Mice had not undergone any behavioral testing. For Golgi staining and analysis, wild-type (WT), knockout (ArKO) and Cyp19-EGFP littermate males (aged P65-P70; one mouse from n = 3 litters for each genotype) were deeply anesthetized with isopentane, rapidly decapitated, and fresh whole brain tissues were collected. Brains were first washed with milli-Q water to remove excess blood and then placed directly in the solution from the FD Rapid GolgiStain TM Kit (FD Neuro-Technologies, Inc., MD, USA). Brains were stored at room temperature in the dark, the solutions were replaced after 24 hours, and the tissues were kept in the solution for two weeks. After two weeks, tissues were transferred into solution C for a minimum of 48 hours at room temperature. For sectioning, brains were frozen rapidly by dipping into isopentane pre-cooled with dry ice, and 100 µm thick coronal sections were cut at -22 °C and mounted on 1% gelatin-coated slides. The sections were then air dried in the dark at room temperature. When sections were completely dry, slides were rinsed with distilled water, placed in the staining solution provided in the kit for 10 min, washed again with distilled water, and dehydrated for 5 min each in 50%, 75%, 95%, and 100% ethanol. Sections were then cleared in xylene and mounted with Permount.

Neuron Tracing

Neuron tracing was conducted on the amygdala and somatosensory cortex of BPA-exposed mice (exposed ED10.5-14.5) and untreated ArKO mice. Neuron tracing in the amygdala was conducted in both male and female mice, and in the somatosensory cortex in male mice only. Stained slides were coded to ensure that morphological analysis was conducted by an observer blind to the animals’ treatment. Morphological analysis followed a previously described protocol 87 with the following modifications: layer V pyramidal cells of the somatosensory cortex that were fully impregnated and free of neighboring cells or cellular debris were randomly selected for analysis (Supplementary Fig.  16 ). Golgi-stained coronal sections containing the medial amygdala (MeA) and somatosensory cortical area were visualized under an Olympus BX51 microscope. Neuronal tracing was carried out using Neurolucida and Neuroexplorer software (MicroBright Field Inc., Williston, USA). Up to three pyramidal cells in the MeA and four pyramidal cells in the somatosensory cortex per section, over 3 sections (9 (MeA) and 12 (cortex) cells per animal, respectively), were sampled 88 , 89 . For Sholl analysis 90 , concentric circles were placed at 10 µm intervals starting from the center of the cell body, and the following parameters were recorded: (i) total dendritic length (sum of the lengths of individual branches) of apical and basal dendrites of pyramidal cells and (ii) the number of spines (protrusions in direct contact with the primary dendrite) and their density (number of spines per 10 µm).
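
For clarity, the two recorded parameters reduce to simple arithmetic; the snippet below illustrates them with invented measurements for a single traced neuron (not study data).

```r
# Invented measurements for one traced neuron
branch_length_um  <- c(112, 86, 140, 95)  # lengths of individual dendritic branches
primary_length_um <- 180                  # length of primary dendrite examined for spines
primary_spines    <- 95                   # protrusions in direct contact with the primary dendrite

total_dendritic_length <- sum(branch_length_um)                       # parameter (i)
spine_density_per_10um <- primary_spines / (primary_length_um / 10)   # parameter (ii)

c(total_length_um = total_dendritic_length,
  spines_per_10um = round(spine_density_per_10um, 2))
```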

Neuron selection criteria: neurons were selected based on the following criteria. They had to be fully stained, and the cell body had to lie in the middle third of the section thickness. The dendrites of the neuron had to be unobscured by other nearby neurons. Neurons also had to show tapering of the majority of their dendrites towards the ends. Representative images of neurons from vehicle- and BPA-exposed adult mice can be found in Supplementary Fig.  17 .

Visualizing c-Fos activation to conspecific exposure (amygdala)

Stranger exposure paradigm procedure.

Cyp19-EGFP mice of both sexes, as well as male ArKO mice together with male WT littermates, were used in this study. Mice had not undergone any other behavior testing. All test mice were acclimatized to the testing room in individual cages for three nights prior to testing. All mice were aged P24 on the day of testing, which was performed between 10 am and 2 pm. Testing was performed in a dedicated room for mouse behavior studies, and no other animals were present in the room at the time of acclimatization and testing. The temperature of the room was maintained at approximately 21 °C.

On the day of testing, each cage containing an isolated test mouse was placed on a stage (a trolley). The lid containing food and water was removed and, immediately afterwards, a sex- and age-matched C57BL/6J stranger mouse or a novel object (a new 1 mL syringe) was placed into the cage and a clean, empty lid was placed on top. New gloves were used to handle each syringe to avoid transferring another mouse’s olfactory signature to it. The 10 min trial began as soon as the cage lid was shut. After 10 min had elapsed, the stranger or the novel object was removed, and the test mouse with its home cage was returned to its original location with the original cage lid with food and water for 2 hours prior to perfusion. Once it was established that there was a difference in c-Fos expression between stranger exposure and novel object exposure in the medial amygdala, BPA- and vehicle-exposed (ED10.5-14.5) Cyp19-EGFP mice as well as estrogen- and sham-pellet-treated ArKO and WT mice were exposed to an age- and sex-matched stranger as described above. c-Fos expression was quantified in male mice only.

Histology and stereological analysis methods can be found in the Supplementary Methods.

Neuron count brain collection, staining, brain region delineation and stereology can be found in the Supplementary Methods.

Electrophysiological studies

Microelectrode array electrophysiology.

Male mice aged 8 weeks, weighing between 15 and 20 g, were used for this study. They had not undergone any behavioral testing prior to electrophysiology. We studied synaptic activity parameters such as the input/output (I/O) curve. Glutamatergic synapses terminating in the basolateral amygdala (BLA) and the basomedial amygdala (BMA) were stimulated; these regions integrate multiple inputs to produce an output (field excitatory postsynaptic potential, fEPSP). The I/O curve serves as an index of synaptic excitability of large neuronal populations. Mice were anesthetized with isoflurane (IsoFlo TM, Abbott Laboratories, Victoria, Australia) and decapitated. Whole brains were quickly removed and placed in ice-cold, oxygenated (95% O2, 5% CO2) cutting solution (composition in mmol/L: 206 sucrose, 3 KCl, 0.5 CaCl2, 6 MgCl2-H2O, 1.25 NaH2PO4, 25 NaHCO3, and 10.6 D-glucose). Coronal amygdala slices (300 µm) were prepared with a VT 1200 S tissue slicer (Leica) and quickly transferred to 34 °C carbogen-bubbled artificial CSF (aCSF) (composition in mmol/L: 126 NaCl, 2.5 KCl, 2.4 CaCl2, 1.36 MgCl2-H2O, 1.25 NaH2PO4, 25 NaHCO3, and 10 D-glucose) for 30 min. After a further 1 h of equilibration in oxygenated aCSF at room temperature, slices were transferred to a submersion recording chamber, an MEA chip with 60 electrodes spaced 200 μm apart (60 MEA 200/30 iR-Ti: MCS GmbH, Reutlingen, Germany). The slice was immobilized with a harp grid (ALA Scientific Instruments, New York, USA) and continuously perfused with carbogenated aCSF (3 mL/min at 32 °C). fEPSPs in the BLA and BMA were evoked by stimulation of a randomly chosen electrode surrounding the target area with a biphasic voltage waveform (100 μs) at intermediate voltage intensity. An electrode was chosen only if it produced a fair number of fEPSPs in the surrounding recording electrodes. EPSP waves with widths ranging from 20 to 30 ms were selected. We chose slices in which the BLA and BMA were well represented according to the Allen Mouse Brain Atlas 91. Care was taken to choose the stimulating electrode in the same region from one slice to the next. The peak-to-peak amplitude of fEPSPs in the BLA and BMA was recorded with LTP-Director and analyzed using LTP-Analyzer (MCS GmbH, Reutlingen, Germany).

Electrocorticogram (ECoG)

Electrocorticogram recordings.

Male mice aged 8 weeks were used for this study. Mice had not undergone any behavioral testing prior to ECoG recording. For ECoG, surgeries were performed as previously described 92. Mice were anesthetized with 1–3% isoflurane and two epidural silver ‘ball’ electrodes were implanted on each hemisphere of the skull. Electrodes were placed 3 mm lateral to the midline and 0.5 mm caudal to bregma. A ground electrode was placed 2.5 mm rostral to bregma and 0.5 mm lateral to the midline. Mice were allowed to recover for at least 48 hours after surgery. ECoG was continuously recorded in freely moving mice for a 4–6-hour period during daylight hours following a standard 30-min habituation period. Signals were band-pass filtered at 0.1 to 40 Hz and sampled at 1 kHz using the Pinnacle EEG/EMG tethered recording system (Pinnacle Technology Inc, KS). Power spectra were calculated using a Hann window with a resolution of 1 Hz in Sirenia Pro analysis software (Pinnacle Technology Inc) on stable 30-min periods of ECoG recordings.
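
As an illustration of the spectral step, the base-R sketch below computes a Hann-windowed power spectrum at 1 Hz resolution from a simulated 1 kHz trace; it is a simplified stand-in for the Sirenia Pro analysis, not that software’s algorithm.

```r
set.seed(1)
fs  <- 1000                       # sampling rate (Hz)
sec <- 30                         # analyze a 30-s stable segment
t   <- seq(0, sec - 1/fs, by = 1/fs)
# Simulated ECoG-like trace: 6 Hz theta rhythm plus broadband noise
x <- sin(2 * pi * 6 * t) + rnorm(length(t), sd = 0.5)

n_win  <- fs                      # 1-s windows -> 1 Hz frequency resolution
hann   <- 0.5 * (1 - cos(2 * pi * (0:(n_win - 1)) / (n_win - 1)))
starts <- seq(1, length(x) - n_win + 1, by = n_win)

# Average the Hann-windowed periodograms over non-overlapping 1-s windows
psd <- rowMeans(sapply(starts, function(s) {
  seg <- x[s:(s + n_win - 1)] * hann
  (Mod(fft(seg))^2) / (fs * sum(hann^2))
}))

freq <- seq(0, fs - 1, by = fs / n_win)
keep <- freq >= 0.1 & freq <= 40           # band of interest in the recordings
plot(freq[keep], psd[keep], type = "l",
     xlab = "Frequency (Hz)", ylab = "Power spectral density")
```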

Primary Cortical Cultures

Neuroprotective effect of 10HDA against injury induced by BPA on embryonic mouse cortical neurons.

Primary cortical neurons were obtained from male Cyp19-EGFP mouse embryos at gestational day 15.5. Embryos were genotyped for SRY to determine sex, and only male embryos were used. Cells were seeded in 24-well plates containing 12 mm glass coverslips coated with 100 µg/mL poly-D-lysine, at a density of ~0.45 x 10^6 cells/well, and incubated in a humidified CO2 incubator (5% CO2, 37 °C). Cells were pre-treated with vehicle (DMSO), 1 mM 10HDA (Matreya, PA, USA), 25 nM BPA, or 1 mM 10HDA with 25 nM BPA. For each group, 10 neurons were measured, and the experiments were duplicated. Each replicate was from a separate culture.

Cells were fixed in 4% paraformaldehyde and stained with mouse anti-βIII tubulin monoclonal primary antibody (1:1000; cat #ab41489, Abcam, United Kingdom) and goat anti-mouse secondary antibody, Alexa Fluor 488 (1:2000; cat #A11017; Invitrogen, USA), to label neuronal cells. Aromatase was stained using rabbit anti-aromatase antibody (1:2000; cat #A7981; Sigma-Aldrich, St. Louis, MO, USA) and donkey anti-rabbit Alexa Fluor 594 (1:2000; cat #A-21207; Invitrogen, USA). Cell nuclei were stained with Hoechst 33258 solution (Sigma 94403, 2 µg/mL). Images were captured using an Olympus IX51 microscope (40x objective). Neurites were quantified using Neurolucida and Neuroexplorer software (MicroBright Field Inc, Williston, USA) as described in the neuron tracing section.

RNA extraction

Total RNA was extracted using the PARIS kit (cat# AM1921, Invitrogen™ PARIS™ Kit) according to the protocol supplied by the manufacturer. cDNA libraries were generated using the SureSelect Strand-Specific RNA Library Prep for Illumina Multiplexed Sequencing kit (Agilent Technologies, CA, USA), according to the manufacturer’s instructions.

In vivo effects of BPA on the fetal brain cortex: RNA-seq

Pregnant Cyp19-EGFP dams were injected subcutaneously with BPA or vehicle from ED10.5-14.5 as described in the previous section and culled on ED15.5 by isoflurane overdose. Fetuses were harvested and placed in chilled PBS. Embryonic brain cortical tissue was dissected from fetuses, snap frozen in liquid nitrogen and stored at −80 °C until RNA extraction. The sex of fetuses was determined by visual assessment of the gonads and Sry (a male-specific gene) genotyping. For each RNA-seq run, 6 cDNA libraries (derived from total RNA samples, with 3 biological samples per group) were analyzed by a MidSeq Nano run (50 bp, single-end reads) on the Illumina platform. Because of undetectable levels of Cyp19a1 RNA in the fetal brain, Cyp19a1 RNA levels were not included; this is consistent with Aromatase+ cells representing <0.05% of neurons in the adult mouse brain 93. Subsequent in vivo transcriptomic analyses were completed in males only. Read quality was assessed with FastQC. Sequence reads were aligned against the Mus musculus genome (build GRCm38) using the TopHat aligner (v2.0.14). Sequencing data were summarized into reads per transcript using featureCounts 94. Transcripts were assembled with the StringTie tool v1.2.4 using the read alignments to Mus_musculus.GRCm38 and the reference annotation based transcript assembly (RABT) option with the Gencode gene models for the mouse GRCm38/mm10 genome build. Normalization and statistical analysis of the count data were executed using edgeR (version edgeR_3.14.2 in RStudio, R version 3.14.2). The data were scaled using the trimmed mean of M-values (TMM) 95 and differentially expressed genes between treatment groups were identified (Benjamini–Hochberg false discovery rate >0.1). Differentially expressed genes (DEGs) were identified by comparing mice exposed to 50 μg/kg/day BPA with those exposed to the vehicle.
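
The count-based differential expression step can be sketched as a generic edgeR workflow, shown below with a simulated count matrix and a hypothetical two-level group factor; it follows the description above in outline but is not the exact study script.

```r
library(edgeR)

set.seed(1)
# Simulated featureCounts-style matrix: rows = genes, columns = 6 samples
counts <- matrix(rnbinom(6000, mu = 50, size = 10), nrow = 1000,
                 dimnames = list(paste0("gene", 1:1000), paste0("sample", 1:6)))
group  <- factor(c("vehicle", "vehicle", "vehicle", "BPA", "BPA", "BPA"),
                 levels = c("vehicle", "BPA"))

y <- DGEList(counts = counts, group = group)
y <- y[filterByExpr(y), , keep.lib.sizes = FALSE]   # drop lowly expressed genes
y <- calcNormFactors(y, method = "TMM")             # trimmed mean of M-values scaling
y <- estimateDisp(y)

et  <- exactTest(y, pair = c("vehicle", "BPA"))     # BPA vs vehicle comparison
res <- topTags(et, n = Inf)$table                   # includes Benjamini-Hochberg FDR
head(res)
```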

In vitro effects of 10HDA in primary cell culture RNASeq

Primary brain cortical neurons were obtained from C57BL/6 mouse embryos at GD 15.5. Neuronal cell cultures were treated with vehicle (DMSO) or 1 mM 10HDA (Matreya, PA, USA) as described above. Libraries were sequenced with 50 bp single-end reads on an Illumina HiSeq, and read quality was assessed using FastQC. Untrimmed reads were aligned to the mouse mm10 genome using the Subjunc aligner (version 1.4.4) within the Subread package 96. Sequencing data were summarized into reads per transcript using featureCounts 94 and the Gencode gene models for the mouse GRCm38/mm10 genome build (August 2014 freeze) 97. Normalization and statistical analysis of the count data were executed using edgeR (version edgeR_3.4.2 in RStudio, R version 3.0.2) 98 after removing features with fewer than 10 counts per million in at least 3 of the samples. The data were scaled using the trimmed mean of M-values (TMM) 95 and differentially expressed genes between treatment groups were identified (Benjamini–Hochberg false discovery rate >0.1). Annotation was added using the Ensembl mouse gene annotation via the biomaRt package 99. Differentially expressed genes were identified by comparing cells exposed to 10HDA with those exposed to the vehicle.

Pathway analysis

The BPA and 10HDA differential expression data were analyzed for enriched pathways using Ingenuity (QIAGEN) and tested against the Canonical Pathway library, the Brain Diseases and Functions library and the Brain Disorders pathway library. We included the top 8 canonical pathways. For the Brain Diseases and Functions library, we included only pathways with p < 0.05 in both the BPA and the 10HDA data; for the Brain Disorders pathway library, all p-values were included (p > 0.05 are shown in gray).

An additional analysis of the gene expression data was performed using the clusterProfiler R package 100, which provides a range of statistical tests to detect pathways from a query gene set. The test used here was Gene Set Enrichment Analysis (GSEA) 101, and the genes were tested against the Gene Ontology pathway database (specifically, GO: Biological Process) 102.
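
A minimal sketch of this GSEA step with clusterProfiler is given below. It assumes a hypothetical named numeric vector gene_stats (gene-level statistics such as log fold changes, keyed by mouse Entrez gene IDs); the package and function names match those published, but the inputs and thresholds are illustrative.

```r
library(clusterProfiler)
library(org.Mm.eg.db)

# gene_stats: hypothetical named numeric vector of gene-level statistics
# (e.g., logFC from edgeR); names are mouse Entrez gene IDs
gene_list <- sort(gene_stats, decreasing = TRUE)   # GSEA requires a ranked list

gsea_bp <- gseGO(geneList     = gene_list,
                 OrgDb        = org.Mm.eg.db,
                 ont          = "BP",               # GO: Biological Process
                 keyType      = "ENTREZID",
                 pvalueCutoff = 0.05)

head(as.data.frame(gsea_bp))                        # enriched GO:BP terms
```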

Computational Molecular Docking

The DockThor molecular docking platform 103 was used to assess binding affinities between estrogen receptor beta (encoded by the ESR2 gene) and the ligands 17β-estradiol (E2; the native ligand), BPA, and 10HDA. DockThor takes as input 3D molecular structures for a putative receptor-ligand pair and employs a genetic-algorithm-based optimization strategy to identify the optimal binding position within a specified search region. The crystal structures of estrogen receptor alpha (ERα) and beta (ERβ) were sourced from the Protein Data Bank (PDB), with respective PDB IDs 5KRI (ERα) and 1YYE (ERβ). For each ligand, the search grid was restricted to the known estrogen receptor beta ligand-binding domain, centered at x = 30, y = 35, z = 40, with a total grid size of x = 25, y = 28 and z = 22. Default settings were used for the optimization procedure.

Statistical analysis

Researchers were blind to treatment during the conduct of the experiment and the outcome assessment but not during statistical analysis.

Mean, standard deviation (SD), and standard error of the mean (SEM) were calculated with GraphPad Prism version 9.4 (GraphPad Software).

Data were tested for equal variances and normality using the Shapiro-Wilk test. As the electrophysiology, Golgi staining and primary cell culture experiments utilized several data points per animal, observations were not independent, and this non-independence was accounted for in our analyses. We used generalized estimating equations (GEEs) in R version 4.1.2 in a marginal modeling approach that estimates population-averaged effects while treating the covariance structure as a nuisance. We specified the covariance structure as exchangeable (that is, we assumed equal correlation between pairs of measurements on the same animal). Given the small number of clusters (i.e., animals), bootstrapped standard errors were estimated using 200 repeats to maintain a conservative type 1 error rate 104. An interaction term was added to the amygdala Golgi staining study to assess sex x genotype or sex x BPA exposure interactions.
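
A minimal sketch of the marginal GEE model with an exchangeable working correlation and a cluster-level bootstrap of the standard error, assuming the geepack package and hypothetical variable names (value, treatment, animal_id), with simulated data; the study’s own scripts may differ.

```r
library(geepack)

set.seed(1)
# Simulated example: several observations (e.g., neurons) per animal
dat <- data.frame(
  animal_id = rep(1:10, each = 6),
  treatment = factor(rep(c("vehicle", "BPA"), each = 30),
                     levels = c("vehicle", "BPA")),
  value     = rnorm(60, mean = rep(c(10, 12), each = 30), sd = 2)
)

# Population-averaged treatment effect with an exchangeable working correlation
fit <- geeglm(value ~ treatment, id = animal_id, data = dat,
              family = gaussian, corstr = "exchangeable")
summary(fit)

# Cluster (animal-level) bootstrap of the treatment coefficient, 200 repeats,
# stratified by treatment group so both groups appear in every resample
boot_coef <- replicate(200, {
  ids <- c(sample(unique(dat$animal_id[dat$treatment == "vehicle"]), replace = TRUE),
           sample(unique(dat$animal_id[dat$treatment == "BPA"]),     replace = TRUE))
  bdat <- do.call(rbind, lapply(seq_along(ids), function(k) {
    d <- dat[dat$animal_id == ids[k], ]
    d$animal_id <- k              # relabel so resampled clusters stay distinct
    d
  }))
  coef(geeglm(value ~ treatment, id = animal_id, data = bdat,
              family = gaussian, corstr = "exchangeable"))[2]
})
sd(boot_coef)                      # bootstrapped standard error of the treatment effect
```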

Where data were normally distributed, parametric tests were conducted. For more than two groups, a one-way ANOVA was conducted with Holm-Sidak post hoc FDR correction, with alpha set to 0.05. Otherwise, unpaired two-tailed Student’s t-tests were used to compare two groups. Where normality could not be assumed, a Mann-Whitney test (comparing two groups) or a Kruskal-Wallis test with Dunn’s post hoc test (comparing three or more groups) was used. For the three-chamber data, a two-way mixed ANOVA was used to assess the group x cage side interaction (stranger cage interaction zone vs empty cage interaction zone), with post hoc testing adjusted by the Holm-Sidak method.
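
The two-group branch of this decision flow can be expressed compactly; the sketch below is illustrative only (using an alpha of 0.05 for the normality check) and is not the study script.

```r
# Illustrative two-group comparison following the decision flow described above
compare_two_groups <- function(x, y, alpha = 0.05) {
  normal <- shapiro.test(x)$p.value > alpha && shapiro.test(y)$p.value > alpha
  if (normal) {
    t.test(x, y, var.equal = TRUE)      # unpaired two-tailed Student's t-test
  } else {
    wilcox.test(x, y)                   # Mann-Whitney test
  }
}

set.seed(1)
compare_two_groups(rnorm(12, mean = 10, sd = 2), rnorm(12, mean = 12, sd = 2))
```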

Comparisons made are indicated on the Figure legends, and p -values < 0.05 were considered significant. All tests were two-sided (two-tailed) where applicable.

Reporting summary

Further information on research design is available in the  Nature Portfolio Reporting Summary linked to this article.

Data availability

The BIS data, including all data used in this paper, are available under restricted access for participant privacy. Access can be obtained by request through the BIS Steering Committee by contacting Anne-Louise Ponsonby, The Florey Institute of Neuroscience and Mental Health, [email protected]. Requests to access cohort data will be responded to within two weeks. Requests are then considered on scientific and ethical grounds and, if approved, provided under collaborative research agreements. Deidentified cohort data can be provided in Stata or CSV format. Additional project information, including cohort data description and access procedure, is available at the project’s website https://www.barwoninfantstudy.org.au . Source data underlying Figs. 1–6, 8–10 and Supplementary Figs. 2, 3–7, 14 have been provided as a Source Data file with this paper. The RNAseq data discussed in this publication have been deposited in NCBI’s Gene Expression Omnibus 105 and are accessible through GEO Series accession numbers GSE266401 (fetal brain expression with and without prenatal BPA exposure) and GSE266400 (primary cortical culture treated with and without 10HDA). Source data are provided with this paper.

American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders: DSM-5 (American Psychiatric Association, 2013).

Maenner, M. et al. Prevalence of autism spectrum disorder among children aged 8 years—autism and developmental disabilities monitoring network, 11 sites, United States, 2016. MMWR Surveill. Summ. 69 , 1–12 (2020).

Li, Q. et al. Prevalence of autism spectrum disorder among children and adolescents in the United States From 2019 to 2020. JAMA Pediatrics 176 , 943–945 (2022).

Atladottir, H. O. et al. The increasing prevalence of reported diagnoses of childhood psychiatric disorders: a descriptive multinational comparison. Eur. Child Adolesc. Psychiatry 24 , 173–183 (2015).

Lai, M.-C., Lombardo, M. V. & Baron-Cohen, S. Autism. Lancet 383 , 896–910 (2014).

Boon, W. C., Chow, J. D. & Simpson, E. R. The multiple roles of estrogens and the enzyme aromatase. Prog. Brain Res. 181 , 209–232 (2010).

Harada, N. & Honda, S.-i Analysis of spatiotemporal regulation of aromatase in the brain using transgenic mice. J. Steroid Biochem Mol. Biol. 95 , 49–55 (2005).

Tan, W., Zhu, Z., Ye, L. & Leung, L. K. Methylation dictates PI.f-specific CYP19 transcription in human glial cells. Mol. Cell Endocrinol. 452 , 131–137 (2017).

Roselli, C. E., Liu, M. & Hurn, P. D. Brain aromatization: classic roles and new perspectives. Semin Reprod. Med. 27 , 207–217 (2009).

Ruiz-Palmero, I. et al. Oestradiol synthesized by female neurons generates sex differences in neuritogenesis. Sci. Rep. 6 , 31891 (2016).

Stanić, D. et al. Characterization of aromatase expression in the adult male and female mouse brain. I. Coexistence with oestrogen receptors α and β, and androgen receptors. PLoS One 9 , e90451 (2014).

Takahashi, K. et al. Association between aromatase in human brains and personality traits. Sci. Rep. 8 , 16841 (2018).

Baron-Cohen, S. The extreme male brain theory of autism. Trends Cogn. Sci. 6 , 248–254 (2002).

Crider, A., Thakkar, R., Ahmed, A. O. & Pillai, A. Dysregulation of estrogen receptor beta (ERβ), aromatase (CYP19A1), and ER co-activators in the middle frontal gyrus of autism spectrum disorder subjects. Mol. Autism 5 , 46–46 (2014).

Sarachana, T. & Hu, V. W. Genome-wide identification of transcriptional targets of RORA reveals direct regulation of multiple genes associated with autism spectrum disorder. Mol. Autism 4 , 14 (2013).

Santangeli, S. et al. Effects of BPA on female reproductive function: the involvement of epigenetic mechanism. Gen. Comp. Endocrinol. 245 , 122–126 (2017).

Ali, A. A. et al. Developmental vitamin D deficiency increases foetal exposure to testosterone. Mol. Autism 11 , 96 (2020).

Andrade, A. J., Grande, S. W., Talsness, C. E., Grote, K. & Chahoud, I. A dose-response study following in utero and lactational exposure to di-(2-ethylhexyl)-phthalate (DEHP): non-monotonic dose-response and low dose effects on rat brain aromatase activity. Toxicology 227 , 185–192 (2006).

Moosa, A., Shu, H., Sarachana, T. & Hu, V. W. Are endocrine disrupting compounds environmental risk factors for autism spectrum disorder? Hormones Behav. 101 , 13–21 (2018).

Ejaredar, M., Lee, Y., Roberts, D. J., Sauve, R. & Dewey, D. Bisphenol A exposure and children’s behavior: a systematic review. J. Expo. Sci. Environ. Epidemiol. 27 , 175–183 (2017).

Chen, D. et al. Bisphenol analogues other than BPA: environmental occurrence, human exposure, and toxicity—a review. Environ. Sci. Technol. 50 , 5438–5453 (2016).

Hansen, J. B. et al. Prenatal exposure to bisphenol A and autistic- and ADHD-related symptoms in children aged 2 and 5 years from the Odense Child Cohort. Environ. Health 20 , 24 (2021).

Lim, Y. H. et al. Prenatal and postnatal bisphenol A exposure and social impairment in 4-year-old children. Environ. Health 16 , 79 (2017).

Ferri, S. L., Abel, T. & Brodkin, E. S. Sex differences in autism spectrum disorder: a review. Curr. Psychiatry Rep. 20 , 9 (2018).

Suzuki, K. M. et al. Estrogenic activities of fatty acids and a sterol isolated from royal jelly. Evid. Based Complement. Alternat. Med. 5 , 295–302 (2008).

Pirgon, O., Atar, M., Çiriş, M. & Sever, M. Effects of royal jelly supplementation on growth plate zones and longitudinal growth in young rats. Mellifera 19 , 1–13 (2019).

Hattori, N., Nomoto, H., Fukumitsu, H., Mishima, S. & Furukawa, S. Royal jelly and its unique fatty acid, 10-hydroxy-trans-2-decenoic acid, promote neurogenesis by neural stem/progenitor cells in vitro. Biomed. Res. 28 , 261–266 (2007).

Vuillermin, P. et al. Cohort profile:the Barwon Infant Study. Int J. Epidemiol. 44 , 1148–1160 (2015).

Achenbach T. M. & Rescorla L. A. Manual for the ASEBA Preschool Forms & Profiles (University of Vermont, Research Center for Children, Youth, & Families, 2000).

Pham, C. et al. Early life environmental factors associated with autism spectrum disorder symptoms in children at age 2 years: a birth cohort study. Autism.: Int. J. Res. Pract. 26 , 1864–1881 (2022).

Kidokoro, K. et al. Association between CYP19A1 polymorphisms and sex hormones in postmenopausal Japanese women. J. Hum. Genet. 54 , 78–85 (2009).

Braun, P. R. et al. Genome-wide DNA methylation comparison between live human brain and peripheral tissues within individuals. Transl. Psychiatry 9 , 47 (2019).

Kundakovic, M. et al. DNA methylation of BDNF as a biomarker of early-life adversity. Proc. Natl Acad. Sci. USA 112 , 6807–6813 (2015).

Panickar, K. S., Guan, G., King, M. A., Rajakumar, G. & Simpkins, J. W. 17beta-estradiol attenuates CREB decline in the rat hippocampus following seizure. J. Neurobiol. 33 , 961–967 (1997).

Solum, D. T. & Handa, R. J. Estrogen regulates the development of brain-derived neurotrophic factor mRNA and protein in the rat hippocampus. J. Neurosci. 22 , 2650–2659 (2002).

U.S. Environmental Protection Agency. Bisphenol A; CASRN 80-05-7. Integrated Risk Information System (IRIS) Chemical Assessment Summary (National Center for Environmental Assessment, 1988).

U.S. Food & Drug Administration. Bisphenol A (BPA): Use in Food Contact Application (U.S. Food & Drug Administration, 2023). [cited 2024 Jul]. Available from: https://www.fda.gov/food/food-additives-petitions/bisphenol-bpa-use-food-contact-application .

European Food Safety Authority. Opinion of the Scientific Panel on food additives, flavourings, processing aids and materials in contact with food (AFC) related to 2,2-bis(4-hydroxyphenyl)propane. EFSA J. 5 , 428 (2007).

Semple, B. D., Dixit, S., Shultz, S. R., Boon, W. C. & O’Brien, T. J. Sex-dependent changes in neuronal morphology and psychosocial behaviors after pediatric brain injury. Behav. Brain Res. 319 , 48–62 (2017).

Soma, M. et al. Development of the mouse amygdala as revealed by enhanced green fluorescent protein gene transfer by means of in utero electroporation. J. Comp. Neurol. 513 , 113–128 (2009).

Fisher, C. R., Graves, K. H., Parlow, A. F. & Simpson, E. R. Characterization of mice deficient in aromatase (ArKO) because of targeted disruption of the cyp19 gene. Proc. Natl Acad. Sci. USA 95 , 6965–6970 (1998).

Hill, R. A. et al. Estrogen deficient male mice develop compulsive behavior. Biol. Psychiatry 61 , 359–366 (2007).

Avino, T. A. et al. Neuron numbers increase in the human amygdala from birth to adulthood, but not in autism. Proc. Natl Acad. Sci. USA 115 , 3710–3715 (2018).

Cai, S., Wang, X., Yang, F., Chen, D. & Huang, L. Differences in brain structural covariance network characteristics in children and adults with autism spectrum disorder. Autism Res. 14 , 265–275 (2021).

Keshavarzi, S., Sullivan, R. K. P., Ianno, D. J. & Sah, P. Functional properties and projections of neurons in the medial amygdala. J. Neurosci. 34 , 8699–8715 (2014).

Alia-Klein, N. et al. Human cognitive ability is modulated by aromatase availability in the brain in a sex-specific manner. Front Neurosci. 14 , 565668 (2020).

Biegon, A. et al. Relationship of estrogen synthesis capacity in the brain with obesity and self-control in men and women. Proc. Natl Acad. Sci. USA 117 , 22962–22966 (2020).

Sato, W. & Uono, S. The atypical social brain network in autism: advances in structural and functional MRI studies. Curr. Opin. Neurol. 32 , 617–621 (2019).

Herrington, J. D., Miller, J. S., Pandey, J. & Schultz, R. T. Anxiety and social deficits have distinct relationships with amygdala function in autism spectrum disorder. Soc. Cogn. Affect Neurosci. 11 , 907–914 (2016).

Krukoff, T. L. C-fos expression as a marker of functional activity in the brain. in Cell Neurobiology Techniques. Neuromethods . Vol. 33 213–230 (Humana Press, 1999).

Stoner, R. et al. Patches of disorganization in the neocortex of children with autism. N. Engl. J. Med. 370 , 1209–1219 (2014).

Anthoni, H. et al. The aromatase gene CYP19A1: several genetic and functional lines of evidence supporting a role in reading, speech and language. Behav. Genet. 42 , 509–527 (2012).

Wang, J. et al. Resting state EEG abnormalities in autism spectrum disorders. J. Neurodev. Disord. 5 , 24 (2013).

Matuszczak, E., Komarowska, M. D., Debek, W. & Hermanowicz, A. The impact of Bisphenol A on fertility, reproductive system, and development: a review of the literature. Int. J. Endocrinol. 2019 , 4068717 (2019).

Moutsatsou, P. et al. Fatty acids derived from royal jelly are modulators of estrogen receptor functions. PLoS ONE 5 , e15594 (2010).

Rubin B. S. Bisphenol A: An endocrine disruptor with widespread exposure and multiple effects. J. Steroid. Biochem. Mol. Biol . 127 , 27–34 (2011).

DeRosa, B. A. et al. Convergent pathways in idiopathic autism revealed by time course transcriptomic analysis of patient-derived neurons. Sci. Rep. 8 , 8423 (2018).

Andrews, S. V. et al. Case-control meta-analysis of blood DNA methylation and autism spectrum disorder. Mol. Autism 9 , 40 (2018).

Thongkorn, S. et al. Sex differences in the effects of prenatal bisphenol A exposure on autism-related genes and their relationships with the hippocampus functions. Sci. Rep. 11 , 1241 (2021).

VanderWeele, T. J. Explanation in causal inference: developments in mediation and interaction. Int J. Epidemiol. 45 , 1904–1908 (2016).

Gerona, R., vom Saal, F. S. & Hunt, P. A. BPA: have flawed analytical techniques compromised risk assessments? Lancet Diabetes Endocrinol. 8 , 11–13 (2020).

Bolognesi, C. et al. Scientific opinion on the risks to public health related to the presence of bisphenol A (BPA) in foodstuffs: executive summary. EFSA J. (2015).

European Food Safety Authority. Re-evaluation of the Risks to Public Health Related to the Presence of Bisphenol A (BPA) in Foodstuffs (European Food Safety Authority, 2021).

EFSA Panel on Food Contact Materials E, et al. Re-evaluation of the risks to public health related to the presence of bisphenol A (BPA) in foodstuffs. EFSA J. 21 , e06857 (2023).

Gao, H. et al. Cumulative risk assessment of phthalates associated with birth outcomes in pregnant Chinese women: a prospective cohort study. Environ. Pollut. 222 , 549–556 (2017).

Welch, C. & Mulligan, K. Does Bisphenol A confer risk of neurodevelopmental disorders? What we have learned from developmental neurotoxicity studies in animal models. Int. J. Mol. Sci. 23 , 2894 (2022).

Stein, T. P., Schluter, M. D., Steer, R. A., Guo, L. & Ming, X. Bisphenol A exposure in children with autism spectrum disorders. Autism Res. 8 , 272–283 (2015).

Rudel, R. A. et al. Food Packaging and Bisphenol A and Bis(2-Ethyhexyl) Phthalate Exposure: Findings from a Dietary Intervention. Environ. Health Perspect. 119 , 914–920 (2011).

Geueke, B. et al. Systematic evidence on migrating and extractable food contact chemicals: most chemicals detected in food contact materials are not listed for use. Crit. Rev. Food Sci. Nutr. 63 , 9425–9435 (2022).

Arbuckle, T. E. et al. Exposure to free and conjugated forms of bisphenol a and triclosan among pregnant women in the MIREC cohort. Environ. Health Perspect. 123 , 277–284 (2015).

Caporale, N. et al. From cohorts to molecules: adverse impacts of endocrine disrupting mixtures. Science 375 , eabe8244 (2022).

Hameed, R. A., Ahmed, E. K., Mahmoud, A. A. & Atef, A. A. G protein-coupled estrogen receptor (GPER) selective agonist G1 attenuates the neurobehavioral, molecular and biochemical alterations induced in a valproic acid rat model of autism. Life Sci. 328 , 121860 (2023).

Heffernan, A. L. et al. Harmonizing analytical chemistry and clinical epidemiology for human biomonitoring studies. A case-study of plastic product chemicals in urine. Chemosphere 238 , 124631 (2020).

Burugupalli, S. et al. Ontogeny of circulating lipid metabolism in pregnancy and early childhood—a longitudinal population study. Elife 11 , e72779 (2022).

McCarthy, S. et al. A reference panel of 64,976 haplotypes for genotype imputation. Nat. Genet. 48 , 1279–1283 (2016).

Ponsonby, A. L. et al. Prenatal phthalate exposure, oxidative stress-related genetic vulnerability and early life neurodevelopment: a birth cohort study. Neurotoxicology 80 , 20–28 (2020).

Wang, Y. et al. A methodological pipeline to generate an epigenetic marker of prenatal exposure to air pollution indicators. Epigenetics 17 , 32–40 (2022).

Perera, F. et al. Prenatal bisphenol a exposure and child behavior in an inner-city cohort. Environ. Health Perspect. 120 , 1190–1194 (2012).

Hunt, P. A., Vom Saal, F. S., Stahlhut, R. & Gerona, R. BPA and risk assessment—Authors’ reply. Lancet Diabetes Endocrinol. 8 , 271–272 (2020).

Burke, J. F., Sussman, J. B., Kent, D. M. & Hayward, R. A. Three simple rules to ensure reasonably credible subgroup analyses. BMJ 351 , h5651 (2015).

Ponsonby, A.-L. Reflection on modern methods: building causal evidence within high-dimensional molecular epidemiological studies of moderate size. Int J. Epidemiol. 50 , 1016–1029 (2021).

R Core Team. R: A Language and Environment for Statistical Computing (R Foundation for Statistical Computing, 2013).

StataCorp. Stata Statistical Software: Release 14 (StataCorp LP, 2015).

Thongkorn, S. et al. Investigation of autism-related transcription factors underlying sex differences in the effects of bisphenol A on transcriptome profiles and synaptogenesis in the offspring hippocampus. Biol. Sex. Differ. 14 , 8 (2023).

Nadler, J. J. et al. Automated apparatus for quantitation of social approach behaviors in mice. Genes, brain, Behav. 3 , 303–314 (2004).

Moy, S. S. et al. Sociability and preference for social novelty in five inbred strains: an approach to assess autistic-like behavior in mice. Genes, Brain, Behav. 3 , 287–302 (2004).

Champagne, D. L. et al. Maternal care and hippocampal plasticity: evidence for experience-dependent structural plasticity, altered synaptic functioning, and differential responsiveness to glucocorticoids and stress. J. Neurosci. 28 , 6037–6045 (2008).

Wang, I. T., Reyes, A. R. & Zhou, Z. Neuronal morphology in MeCP2 mouse models is intrinsically variable and depends on age, cell type, and Mecp2 mutation. Neurobiol. Dis. 58 , 3–12 (2013).

Kelly, E. A., Opanashuk, L. A. & Majewska, A. K. The effects of postnatal exposure to low-dose bisphenol-A on activity-dependent plasticity in the mouse sensory cortex. Front. Neuroanat. 8 , 117 (2014).

Sholl, D. A. Dendritic organization in the neurons of the visual and motor cortices of the cat. J. Anat. 87 , 387–406 (1953).

Allen Institute for Brain Science. Allen Mouse Brain Atlas. https://mouse.brain-map.org/static/atlas (2004).

Tan, H. O. et al. Reduced cortical inhibition in a mouse model of familial childhood absence epilepsy. Proc. Natl Acad. Sci. USA 104 , 17536–17541 (2007).

Unger, E. K. et al. Medial amygdalar aromatase neurons regulate aggression in both sexes. Cell Rep. 10 , 453–462 (2015).

Liao, Y., Smyth, G. K. & Shi, W. featureCounts: an efficient general purpose program for assigning sequence reads to genomic features. Bioinformatics 30 , 923–930 (2013).

Robinson, M. D. & Oshlack, A. A scaling normalization method for differential expression analysis of RNA-seq data. Genome Biol. 11 , R25 (2010).

Liao, Y., Smyth, G. K. & Shi, W. The Subread aligner: fast, accurate and scalable read mapping by seed-and-vote. Nucleic Acids Res. 41 , e108 (2013).

Harrow, J. et al. GENCODE: producing a reference annotation for ENCODE. Genome Biol. 7 , S4.1–9 (2006).

Robinson, M. D., McCarthy, D. J. & Smyth, G. K. edgeR: a Bioconductor package for differential expression analysis of digital gene expression data. Bioinformatics 26 , 139–140 (2010).

Durinck, S., Spellman, P. T., Birney, E. & Huber, W. Mapping identifiers for the integration of genomic datasets with the R/Bioconductor package biomaRt. Nat. Protoc. 4 , 1184–1191 (2009).

Wu, T. et al. clusterProfiler 4.0: a universal enrichment tool for interpreting omics data. Innovation 2 , 100141 (2021).

Subramanian, A. et al. Gene set enrichment analysis: a knowledge-based approach for interpreting genome-wide expression profiles. Proc. Natl Acad. Sci. USA 102 , 15545–15550 (2005).

The Gene Ontology Consortium. The Gene Ontology Resource: 20 years and still GOing strong. Nucleic Acids Res. 47 , D330–d338 (2019).

Santos, K. B., Guedes, I. A., Karl, A. L. M. & Dardenne, L. E. Highly flexible ligand docking: benchmarking of the DockThor Program on the LEADS-PEP protein–peptide data set. J. Chem. Inf. Model 60 , 667–683 (2020).

Leyrat, C., Morgan, K. E., Leurent, B. & Kahan, B. C. Cluster randomized trials with a small number of clusters: which analyses should be used? Int J. Epidemiol. 47 , 321–331 (2018).

Edgar, R., Domrachev, M. & Lash, A. E. Gene Expression Omnibus: NCBI gene expression and hybridization array data repository. Nucleic Acids Res. 30 , 207–210 (2002).

Xiao, X. et al. Bisphenol AP is anti-estrogenic and may cause adverse effects at low doses relevant to human exposure. Environ. Pollut. 242 , 1625–1632 (2018).

Acknowledgements

The authors thank the BIS participants for their generous contribution to this project. The authors also thank current and past cohort staff. The establishment work and infrastructure for the BIS was provided by the Murdoch Children’s Research Institute, Deakin University, and Barwon Health, supported by the Victorian Government’s Operational Infrastructure Program. We thank all the children and families participating in the study, and the BIS fieldwork team. We acknowledge Barwon Health, Murdoch Children’s Research Institute, and Deakin University for their support in the development of this research. We thank Dr Shanie Landen for statistical advice, and Alex Eisner for independent statistical review of the analyses in the manuscript. We thank Soumini Vijayay and Kristie Thompson for human BPA lab measurement and Dr Steve Cheung for assistance preparing the primary cortical culture. We thank Chitra Chandran, Georgia Cotter, Stephanie Glynn, Oliver Wood and Janxian Ng for manuscript preparation. Manuscript editor Julian Heng (Remotely Consulting, Australia) provided professional editing of this article. This multimodal project was supported by funding from the Minderoo Foundation. Funding was also provided by the National Health and Medical Research Council of Australia (NHMRC), the NHMRC-EU partnership grant for the ENDpoiNT consortium, the Australian Research Council, the Jack Brockhoff Foundation, the Shane O’Brien Memorial Asthma Foundation, the Our Women’s Our Children’s Fund Raising Committee Barwon Health, The Shepherd Foundation, the Rotary Club of Geelong, the Ilhan Food Allergy Foundation, GMHBA Limited, Vanguard Investments Australia Ltd, and the Percy Baxter Charitable Trust, Perpetual Trustees, Fred P Archer Fellowship; the Scobie Trust; Philip Bushell Foundation; Pierce Armstrong Foundation; The Canadian Institutes of Health Research; BioAutism, William and Vera Ellen Houston Memorial Trust Fund, Homer Hack Research Small Grants Scheme and the Medical Research Commercialisation Fund. This work was also supported by Ms. Loh Kia Hui. This project received funding from a NHMRC-EU partner grant with the European Union’s Horizon 2020 Research and Innovation Programme, under Grant Agreement number: 825759 (ENDpoiNTs project). This work was also supported by NHMRC Investigator Fellowships (GTN1175744 to D.B., APP1197234 to A.-L.P., and GRT1193840 to P.S.). The study sponsors were not involved in the collection, analysis, and interpretation of data; writing of the report; or the decision to submit the report for publication.

Author information

Nhi Thao Tran

Present address: The Ritchie Centre, Department of Obstetrics and Gynaecology, School of Clinical Sciences, Monash University, Clayton, Australia

These authors contributed equally: Christos Symeonides, Kristina Vacy.

These authors jointly supervised this work: Anne-Louise Ponsonby, Wah Chin Boon.

Authors and Affiliations

Minderoo Foundation, Perth, Australia

  • Christos Symeonides

Murdoch Children’s Research Institute, Parkville, Australia

Christos Symeonides, Toby Mansell, Martin O’Hely, Boris Novakovic, David Burgner, Mimi L. K. Tang, Richard Saffery, Peter Vuillermin, Fiona Collier, Anne-Louise Ponsonby, Sarath Ranganathan & Lawrence Gray

Centre for Community Child Health, Royal Children’s Hospital, Parkville, Australia

Christos Symeonides, Sarath Ranganathan & Anne-Louise Ponsonby

The Florey Institute of Neuroscience and Mental Health, Parkville, Australia

Kristina Vacy, Sarah Thomson, Sam Tanner, Hui Kheng Chua, Shilpi Dixit, Jessalynn Chia, Nhi Thao Tran, Sang Eun Hwang, Feng Chen, Tae Hwan Kim, Christopher A. Reid, Anthony El-Bitar, Gabriel B. Bernasochi, Anne-Louise Ponsonby & Wah Chin Boon

School of Population and Global Health, The University of Melbourne, Parkville, Australia

Kristina Vacy

The Hudson Institute of Medical Research, Clayton, Australia

Hui Kheng Chua & Yann W. Yap

Department of Pediatrics, The University of Melbourne, Parkville, Australia

Toby Mansell & David Burgner

School of Medicine, Deakin University, Geelong, Australia

Martin O’Hely, Boris Novakovic, Chloe J. Love, Peter D. Sly, Peter Vuillermin, Fiona Collier & Lawrence Gray

Columbia Center for Children’s Environmental Health, Columbia University, New York, NY, USA

Julie B. Herbstman, Shuang Wang & Jia Guo

Department of Environmental Health Sciences, Columbia University, New York, NY, USA

Julie B. Herbstman

Department of Biostatistics, Columbia University, New York, NY, USA

Shuang Wang & Jia Guo

Department of Anatomy and Developmental Biology, Monash University, Clayton, Australia

Kara Britt & Vincent R. Harley

Breast Cancer Risk and Prevention Laboratory, Peter MacCallum Cancer Centre, Melbourne, Australia

Sir Peter MacCallum Department of Oncology, The University of Melbourne, Melbourne, Australia

Faculty Medicine, Dentistry & Health Sciences, University of Melbourne, Parkville, Australia

Gabriel B. Bernasochi, Lea M. Durham Delbridge, Mimi L. K. Tang, Leonard C. Harrison & Sarath Ranganathan

Sex Development Laboratory, Hudson Institute of Medical Research, Clayton, Australia

Vincent R. Harley & Yann W. Yap

Departments of Paediatrics and Community Health Sciences, The University of Calgary, Calgary, Canada

Deborah Dewey

Barwon Health, Geelong, Australia

Chloe J. Love, Peter Vuillermin, Fiona Collier & Lawrence Gray

Department of General Medicine, Royal Children’s Hospital, Parkville, Australia

David Burgner

Department of Pediatrics, Monash University, Clayton, Australia

Child Health Research Centre, The University of Queensland, Brisbane, Australia

Peter D. Sly

WHO Collaborating Centre for Children’s Health and Environment, Brisbane, Australia

Queensland Alliance for Environmental Health Sciences, The University of Queensland, Brisbane, Australia

Jochen F. Mueller

Monash Krongold Clinic, Faculty of Education, Monash University, Clayton, Australia

Nicole Rinehart

Centre for Developmental Psychiatry and Psychology, Monash University, Clayton, Australia

Bruce Tonge

School of BioSciences, Faculty of Science, The University of Melbourne, Parkville, Australia

Wah Chin Boon

Walter and Eliza Hall Institute, Parkville, Australia

Leonard C. Harrison

the BIS Investigator Group

  • Toby Mansell
  • Martin O’Hely
  • David Burgner
  • Mimi L. K. Tang
  • Peter D. Sly
  • Richard Saffery
  • Jochen F. Mueller
  • Peter Vuillermin
  • Fiona Collier
  • Anne-Louise Ponsonby
  • Leonard C. Harrison
  • Sarath Ranganathan
  • Lawrence Gray

Contributions

Conceptualization—laboratory experiments: W.C.B., N.R., B.T., L.M.D.D. Laboratory experiments and analysis: W.C.B., K.V., S.D., H.K.C., J.C., F.C., C.R., T.K., G.B.B., A.E.-B., S.E.H., N.T.T., K.B. Supervision of lab data collection: W.C.B., K.V., S.D., C.R. Laboratory statistical analysis: W.C.B., K.V., F.C., C.R., S.Th., V.H., Y.W.Y. Design and conduct of the Barwon Infant Study: C.S., A.-L.P., P.V., D.B., P.S., C.L., M.L.K.T., BIS Investigator Group. Design, conduct, and analysis of the CCCEH-MN study: J.B.H., S.W., J.G. Design and conduct of BPA study measures in BIS: J.M., C.S., A.-L.P. Design, conduct, and analysis of gene methylation studies: S.Ta., B.N., T.M., R.S., D.D., A.-L.P. Human studies statistical analysis: C.S., S.Th., A.-L.P., S.Ta., K.V., M.O.H. Writing—reports and original draft: C.S., K.V., S.Th., S.Ta., A.-L.P., W.C.B. Writing—editing: all authors. Results interpretation: all authors. K.B. performed the estrogen pellet implantation experiment.

Corresponding author

Correspondence to Wah Chin Boon .

Ethics declarations

Competing interests.

W.C.B. is a co-inventor on ‘Methods of treating neurodevelopmental diseases and disorders’, US Patent No. US9925163B2, Australian Patent No. 2015271652, which has been licensed to Meizon Innovation Holdings. A.-L.P. is a scientific advisor to, and W.C.B. is a board member of, Meizon Innovation Holdings. The remaining authors declare no competing interests.

Peer review

Peer review information.

Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work. A peer review file is available.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

  • Supplementary Information
  • Peer Review File
  • Description of Additional Supplementary Files
  • Supplementary Movie 1
  • Supplementary Movie 2
  • Supplementary Dataset 1
  • Supplementary Dataset 2
  • Reporting Summary
  • Source Data

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Reprints and permissions

About this article

Cite this article.

Symeonides, C., Vacy, K., Thomson, S. et al. Male autism spectrum disorder is linked to brain aromatase disruption by prenatal BPA in multimodal investigations and 10HDA ameliorates the related mouse phenotype. Nat Commun 15, 6367 (2024). https://doi.org/10.1038/s41467-024-48897-8

Download citation

Received: 01 December 2022

Accepted: 16 May 2024

Published: 07 August 2024

DOI: https://doi.org/10.1038/s41467-024-48897-8




