Critical analysis examples of theories
The following sentences are examples of the phrases used to explain strengths and weaknesses.
Smith’s (2005) theory appears up to date, practical and applicable across many divergent settings.
Brown’s (2010) theory, although parsimonious and logical, lacks a sufficient body of evidence to support its propositions and predictions.
Little scientific evidence has been presented to support the premises of this theory.
One of the limitations with this theory is that it does not explain why…
A significant strength of this model is that it takes into account …
The propositions of this model appear unambiguous and logical.
A key problem with this framework is the conceptual inconsistency between ….
The table below summarizes the criteria for judging the strengths and weaknesses of a concept:
Evaluating Concepts
Strengths | Weaknesses |
---|---|
Key variables or constructs identified | key variables or constructs omitted or missed |
Clear, well-defined, specific, precise | ambiguous, vague, ill-defined, overly general, imprecise, not sufficiently distinctive, overinclusive, too broad, or narrowly defined |
Meaningful, useful | conceptually flawed |
Logical | contradictory |
Relevant | questionable relevance |
Up-to-date | out of date |
Critical analysis examples of concepts
Many researchers have used the concept of control in different ways.
There is little consensus about what constitutes automaticity.
Putting forth a very general definition of motivation means that it is possible that any behaviour could be included.
The concept of global education lacks clarity, is imprecisely defined and is overly complex.
Some have questioned the usefulness of resilience as a concept because it has been used so often and in so many contexts.
Research suggests that the concept of preoperative fasting is an outdated clinical approach.
The table below summarizes the criteria for judging the strengths and weaknesses of an argument, viewpoint or idea:
Evaluating Arguments, Views or Ideas
Strengths | Weaknesses |
---|---|
Reasons and evidence provided support the argument | the reasons or evidence do not support the argument: overgeneralization |
Substantiated (supported) by factual evidence | insufficient substantiation (support) |
Evidence is relevant and believable | based on peripheral or irrelevant evidence |
Unbiased: sufficient or important evidence or ideas included and considered | biased: overlooks, omits, disregards, or is selective with important or relevant evidence or ideas |
Evidence from reputable or authoritative sources | evidence relies on non-reputable or unrecognized sources |
Balanced: considers opposing views | unbalanced: does not consider opposing views |
Clear, not confused, unambiguous | confused, ambiguous |
Logical, consistent | the reasons do not follow logically from and support the arguments; arguments or ideas are inconsistent |
Convincing | unconvincing |
Critical analysis examples of arguments, viewpoints or ideas
The validity of this argument is questionable as there is insufficient evidence to support it.
Many writers have challenged Jones’ claim on the grounds that …….
This argument fails to draw on the evidence of others in the field.
This explanation is incomplete because it does not explain why…
The key problem with this explanation is that ……
The existing accounts fail to resolve the contradiction between …
However, there is an inconsistency with this argument. The inconsistency lies in…
Although this argument has been proposed by some, it lacks justification.
However, the body of evidence showing that… contradicts this argument.
The table below provides the criteria for judging the strengths and weaknesses of methodology.
An evaluation of a methodology usually involves a critical analysis of its main sections:
design; sampling (participants); measurement tools and materials; procedure
Evaluating a Methodology
Strengths | Weaknesses |
---|---|
Research design tests the hypotheses or research questions | research design is inappropriate for the hypotheses or research questions |
Valid and reliable method | dubious, questionable validity |
The method addresses potential sources of bias or measurement error; confounding variables were identified | insufficiently rigorous: measurement error produces questionable or unreliable results; confounding variables not identified or addressed |
The method (sample, measurement tools, procedure) allows results to be generalized or transferred | generalizability of the results is limited |
Sampling of the cohort and of the phenomena under investigation was sufficiently wide and representative to enable generalization; the response rate was sufficiently high | unrepresentative sample: small sample size or limited sample range of cohort or phenomena under investigation; response rate was too low |
Measurement tool(s)/instrument(s) were appropriate, reliable and valid; measurements were accurate | inappropriate measurement tools: incomplete, ambiguous, unclear, or contradictory scale items; inaccurate measurement; reliability statistics from previous research for the measurement tool not reported |
Procedure reliable and valid | measurement error from administration of the measurement tool(s) |
Method was clearly explained and sufficiently detailed to allow replication | explanation of the methodology (or parts of it, for example the procedure) is unclear, confused, imprecise, ambiguous, inconsistent or contradictory |
Critical analysis examples of a methodology
The unrepresentativeness of the sample makes these results misleading.
The presence of unmeasured variables in this study limits the interpretation of the results.
Other, unmeasured confounding variables may be influencing this association.
The interpretation of the data requires caution because the effect of confounding variables was not taken into account.
The insufficient control of several response biases in this study means the results are likely to be unreliable.
Although this correlational study shows association between the variables, it does not establish a causal relationship.
Taken together, the methodological shortcomings of this study suggest the need for serious caution in the meaningful interpretation of the study’s results.
The table below provides the criteria for judging the strengths and weaknesses of research results and conclusions:
Evaluating the Results and Conclusions
Strengths | Weaknesses |
---|---|
Chose and used appropriate statistics | inappropriate choice or use of statistics |
Results interpreted correctly or accurately | incorrect interpretation of results; the results have been over-interpreted (for example, correlation measures have been incorrectly interpreted to suggest causation rather than association) |
All results were explained, including inconsistent or misleading results | inconsistent or misleading results not explained |
Alternative explanations for results were considered | unbalanced explanations: alternative explanations for results not explored |
Significance of all results was considered | incomplete consideration of results |
Results considered for consistency with other research or viewpoints; results are conclusive because they have been replicated by other studies | consistency of results with other research not considered; results are suggestive rather than conclusive because they have not been replicated by other studies |
Results add significantly to existing understanding or knowledge | results do not significantly add to existing understanding or knowledge |
Limitations of the research design or method are acknowledged | limitations of the research design or method not considered |
Results were clearly explained, sufficiently detailed, consistent | results were unclear, insufficiently detailed, inconsistent, confusing, ambiguous, contradictory |
Conclusions were consistent with and supported by the results | conclusions were not consistent with or not supported by the results |
What is in this guide: definitions, putting it together, tips, and examples of critiques.
This guide is meant to help you understand the basics of writing a critical analysis. A critical analysis is an argument about a particular piece of media. There are typically two parts: (1) identify and explain the argument the author is making, and (2) provide your own argument about that argument. Your instructor may have very specific requirements for how you are to write your critical analysis, so make sure you read your assignment carefully.
A critical analysis encourages a deep approach to understanding a piece of media by relating new knowledge to what you already know.
Critical research was created out of a need to examine power, inequities, and their implications for the status quo in society. It is a necessary departure from traditional scientific research in that it looks beyond what is directly observable to analyze the social world and develop social theory from novel perspectives that address previous injustices. In this article, we'll look at what critical theory entails for qualitative research, as well as the different strands that make up critical research.
In specific terms, critical research examines the nature of power dynamics influencing the social world. More broadly, this has implications for understanding inequality and disparity across cleavages of race, gender, ethnicity, sexual orientation, and economic class, among other differences in identity.
While there are many different strands of critical research, scholars of critical theory share a number of common characteristics in their approach.
One of the more famous studies to produce a critical analysis is the doll test first devised by Mamie Clark, then conducted with husband Kenneth Clark starting in the 1940s and replicated in later years. In the doll test, children were asked how they felt about dolls that were put in front of them. The children preferred to play with the dolls that looked white rather than the dolls that looked black, and had more positive views about the white-looking dolls. Children who were black also tended to share the same perception of black-looking dolls, which suggested that their surrounding environment - particularly the school system but more broadly the culture around them - profoundly impacted them by reinforcing negative stereotypes about racial minorities.
Critical theorists argue that such stereotypes, especially when perpetuated by institutions like education and mass media, further contribute to economic and social disparities when children of color are exposed to negative attitudes about race and ethnicity. This novel research provided fundamental insights that led to real-world change, most famously informing the US Supreme Court's school desegregation ruling in Brown v. Board of Education (1954).
Here are some of the various forms of critical research. Keep in mind that these approaches are neither mutually exclusive nor exhaustive of the entire array of critical theory, though each has its own distinct focus and sheds light on specific issues relevant to the social sciences.
Critical analysis may or may not be a component of this particular course's evaluation, but it is an important component of any research process.
Inquiry-based learning
Critical thinking is at the heart of scientific inquiry. A good scientist is one who never stops asking why things happen, or how things happen. Science makes progress when we find data that contradicts our current scientific ideas.
Scientific inquiry includes three key areas:
1. Identifying a problem and asking questions about that problem
2. Selecting information to respond to the problem and evaluating it
3. Drawing conclusions from the evidence
Hart, T. (2018, October 18). Teaching critical thinking in science - the key to students' future success. Brighter Thinking Blog. https://www.cambridge.org/us/education/blog/2018/10/18/teaching-critical-thinking-science-key-students-future-success/
Six key questions will help readers to assess qualitative research
Appraising qualitative research is different from appraising quantitative research
Qualitative research papers should show appropriate sampling, data collection, and data analysis
Transferability of qualitative research depends on context and may be enhanced by using theory
Ethics in qualitative research goes beyond review boards’ requirements to involve complex issues of confidentiality, reflexivity, and power
Over the past decade, readers of medical journals have gained skills in critically appraising studies to determine whether the results can be trusted and applied to their own practice settings. Criteria have been designed to assess studies that use quantitative methods, and these are now in common use.
In this article we offer guidance for readers on how to assess a study that uses qualitative research methods by providing six key questions to ask when reading qualitative research (box 1). However, the thorough assessment of qualitative research is an interpretive act and requires informed reflective thought rather than the simple application of a scoring system.
Was the sample used in the study appropriate to its research question?
Were the data collected appropriately?
Were the data analysed appropriately?
Can I transfer the results of this study to my own setting?
Does the study adequately address potential ethical issues, including reflexivity?
Overall: is what the researchers did clear?
One of the critical decisions in a qualitative study is whom or what to include in the sample—whom to interview, whom to observe, what texts to analyse. An understanding that qualitative research is based in experience and in the construction of meaning, combined with the specific research question, should guide the sampling process. For example, a study of the experience of survivors of domestic violence that examined their reasons for not seeking help from healthcare providers might focus on interviewing a …
By Laura Brown on 29th May 2023
Conducting a critical analysis of a research paper includes the evaluation of its methodology, data sources, and findings. Alongside, it is necessary to assess the paper’s strengths and weaknesses, identify any biases or limitations, and examine its contribution to the respective field. Additionally, considering alternative interpretations and potential implications is key to providing a comprehensive analysis.
The art of critical analysis is a crucial skill for researchers and scholars alike. It allows us to delve deeper, question assumptions, and uncover the strengths and weaknesses of a research paper. This blog covers the essential steps to master the art of conducting a critical evaluation, along with examples.
Research papers serve as a foundation for advancing knowledge and shaping academic discourse. By critically analysing these papers, we can assess their validity, identify their contributions, and even influence the direction of future research. Throughout this post, we will guide you through the process of understanding a research paper, evaluating its strengths and weaknesses, assessing its contribution, formulating your analysis, considering alternative perspectives, and providing recommendations.
Whether you’re a student, a researcher, or an avid reader of scholarly work, developing the ability to critically analyse a research paper will enhance your understanding of and engagement with academic literature and scientific articles. Let’s dive into the world of critical analysis with this handy guide.
To effectively analyse a research paper, it is crucial to gain a comprehensive understanding of its content. You may begin by thoroughly reading the paper and paying attention to every detail. Further, you should identify the main research question or objective that the study aims to address. This will provide you with a focal point for your analysis.
Now, familiarise yourself with the methodology used and the data collected for the research. Moreover, evaluate the appropriateness and reliability of the chosen methodology, and assess the quality of the data collection and analysis. Understanding these aspects will help you gauge the validity and robustness of the research.
Additionally, take note of the key findings and conclusions presented in the paper, analyse the supporting evidence, and evaluate whether the conclusions align with the research objectives. You should also consider any limitations or potential biases that might affect the interpretation of the results. By thoroughly understanding the scientific paper, you will lay a solid foundation for your critical analysis.
In order to conduct a comprehensive critical analysis of a research paper, it is essential to identify its strengths and weaknesses. Here are key aspects to consider during this evaluation process.
First, assess whether it follows a logical flow and if the sections are well-developed and interconnected. Remember, a well-structured paper enhances readability and comprehension.
Next, look for concise statements and a logical progression of ideas. Moreover, analyse how well the author supports their arguments with relevant evidence and whether the reasoning is sound.
Further, analyse the relevance of the data and sources used. You should examine the quality and appropriateness of the cited sources. Also, check whether the facts presented adequately support the claims made by the author and whether there is a robust foundation for the conclusions drawn.
Now, this is the time to identify the strengths and weaknesses of the research methods used. At this moment, you should also consider any limitations that may impact the validity or generalizability of the findings.
Finally, consider the author’s affiliations, funding sources, or personal beliefs that could influence the research outcomes.
It is crucial to take a deeper look at the paper’s contribution while critically analysing it. You may go through the following steps for critical evaluation.
Firstly, determine whether the paper presents new ideas, approaches, or insights that contribute to the field. Additionally, assess its potential to advance knowledge and fill gaps in existing research.
Secondly, evaluate how the paper builds upon or challenges existing theories, concepts, or methodologies along with assessing its potential to expand understanding or provide novelty.
Finally, analyse how the research paper’s findings may influence practice, policy, or future research directions. Also, consider the broader implications and relevance of the research within the context of the field or society.
Formulating a strong and insightful analysis is a crucial aspect of research paper critical analysis. To effectively present your analysis, follow the below-mentioned steps:
Let’s see an example of a thesis statement for initiating your critical analysis of a research paper.
The research paper’s findings on the impact of deforestation are valuable, but its failure to address socio-economic factors limits its comprehensive understanding of the issue.
In a critical analysis of a scientific article or research paper, it is essential to consider alternative perspectives in order to present a well-rounded evaluation. Follow these steps to engage effectively with different viewpoints.
Here’s an example of considering alternative perspectives in a critical evaluation of a research paper on climate change:
It becomes evident that the paper’s findings on the impact of deforestation are valuable. The research provides insights into the ecological consequences and loss of biodiversity resulting from deforestation. However, a crucial limitation of the paper lies in its failure to address socio-economic factors. By neglecting the socio-economic aspects, such as the role of industries, government policies, and societal behaviours, the research paper lacks a comprehensive understanding of the issue. To gain a holistic understanding, it is recommended to consult the following additional resources.
Here you can present various resources as you need.
When writing a critical analysis of a research paper, it is important to go beyond evaluating strengths and weaknesses and to offer constructive recommendations for improvement. Here’s an example of how this section could be written.
Based on the critical analysis of the research paper on renewable energy sources, several recommendations emerge. Firstly, the paper could benefit from a more comprehensive discussion of the economic viability of renewable technologies. Incorporating an analysis of cost-effectiveness and potential financing models would strengthen the paper’s practical implications. Secondly, the authors should consider addressing potential limitations and uncertainties associated with the data sources used. Providing transparency and acknowledging any gaps would enhance the overall credibility of the research. Lastly, there is a need for further investigation into the social acceptance and adoption of renewable energy technologies, as understanding the human dimension is crucial for successful implementation. By offering these recommendations, the research paper can be enhanced and contribute more effectively to the field.
Students often ask how to write the conclusion of a report or critical analysis; here is how it is done. The conclusion of a critical analysis of scientific literature or a research paper should succinctly summarise the key points of the analysis, emphasising the significance of critical thinking. It should reinforce the importance of addressing any limitations or gaps in the research and encourage further exploration. The conclusion should leave readers with a clear understanding of the paper’s strengths and weaknesses, and inspire them to apply critical analysis principles in their own research. Here is an example conclusion from a critical analysis of a research paper.
The critical analysis of the research paper on climate change brings to light the importance of addressing socio-economic factors for a comprehensive understanding of the issue. While the paper’s findings on the impact of deforestation are valuable, the omission of socio-economic considerations limits its applicability in developing effective solutions. It is crucial for future research to incorporate the interplay between environmental and socio-economic factors to devise holistic strategies. By recognising and rectifying these gaps, researchers can contribute to a more nuanced understanding of climate change and inform policies that foster sustainable development and resilience.
For readers seeking further exploration and a deeper understanding of the research paper, you can also list some additional resources. These are not part of the critical analysis itself, but they can be a useful addition.
Here are 10 points as a summary of this blog. You may also use them as a checklist while preparing to conduct a critical analysis of a research paper.
Follow this research paper checklist for critically analysing a research paper, and you will definitely rock it.
Last Updated: August 3, 2024. This article was co-authored by Jake Adams, an academic tutor and the owner of Simplifi EDU, a Santa Monica, California based online tutoring business offering learning resources and online tutors for academic subjects K-College, SAT & ACT prep, and college admissions applications. Jake holds a BS in International Business and Marketing from Pepperdine University.
When writing a critical analysis, take a moment to reflect on the source material and the author's main ideas to come up with your thesis statement. Be sure to write down your own responses to the points the author was making, and respond to each in a paragraph.
Tip : Keep in mind that you can also have a positive critique of the text if you think it was effective. For example, if the author’s description of greenhouse gasses was written in simple, easy to understand language, you might note this as part of your analysis.
Tip : Check with your teacher for details on how to cite sources. They may want you to use a specific citation style, such as MLA, Chicago, or APA.
To write a critical analysis, first introduce the work you’re analyzing, including information about the work’s author and their purpose in writing it. As part of the introduction, briefly state your overall evaluation of the work. Then, summarize the author’s key points before you use the bulk of your paper to provide your full critique of the work. Try to put each point you want to make in a separate paragraph for clarity. Finally, write a concluding paragraph that restates your opinion of the work, balances positive and negative comments, and offers any suggestions for improvement.
Published on August 23, 2019 by Amy Luo. Revised on June 22, 2023.
Critical discourse analysis (or discourse analysis) is a research method for studying written or spoken language in relation to its social context. It aims to understand how language is used in real-life situations.
When you conduct discourse analysis, you can focus on many different aspects of language use, from word choice and grammar to the broader social context of a text.
Discourse analysis is a common qualitative research method in many humanities and social science disciplines, including linguistics, sociology, anthropology, psychology and cultural studies.
Conducting discourse analysis means examining how language functions and how meaning is created in different social contexts. It can be applied to any instance of written or oral language, as well as non-verbal aspects of communication such as tone and gestures.
Materials that are suitable for discourse analysis include written texts such as books, newspapers, and marketing material, as well as spoken material such as interviews and conversations. By analyzing these types of discourse, researchers aim to gain an understanding of social groups and how they communicate.
Unlike linguistic approaches that focus only on the rules of language use, discourse analysis emphasizes the contextual meaning of language.
It focuses on the social aspects of communication and the ways people use language to achieve specific effects (e.g. to build trust, to create doubt, to evoke emotions, or to manage conflict).
Instead of focusing on smaller units of language, such as sounds, words or phrases, discourse analysis is used to study larger chunks of language, such as entire conversations, texts, or collections of texts. The selected sources can be analyzed on multiple levels.
Level of communication | What is analyzed? |
---|---|
Vocabulary | Words and phrases can be analyzed for ideological associations, formality, and euphemistic and metaphorical content. |
Grammar | The way that sentences are constructed (e.g., active or passive construction, and the use of imperatives and questions) can reveal aspects of intended meaning. |
Structure | The structure of a text can be analyzed for how it creates emphasis or builds a narrative. |
Genre | Texts can be analyzed in relation to the conventions and communicative aims of their genre (e.g., political speeches or tabloid newspaper articles). |
Non-verbal communication | Non-verbal aspects of speech, such as tone of voice, pauses, gestures, and sounds like “um”, can reveal aspects of a speaker’s intentions, attitudes, and emotions. |
Conversational codes | The interaction between people in a conversation, such as turn-taking, interruptions and listener response, can reveal aspects of cultural conventions and social roles. |
Discourse analysis is a qualitative and interpretive method of analyzing texts (in contrast to more systematic methods like content analysis ). You make interpretations based on both the details of the material itself and on contextual knowledge.
There are many different approaches and techniques you can use to conduct discourse analysis, but the steps below outline the basic structure you need to follow. Following these steps can help you avoid pitfalls of confirmation bias that can cloud your analysis.
To do discourse analysis, you begin with a clearly defined research question . Once you have developed your question, select a range of material that is appropriate to answer it.
Discourse analysis is a method that can be applied both to large volumes of material and to smaller samples, depending on the aims and timescale of your research.
Next, you must establish the social and historical context in which the material was produced and intended to be received. Gather factual details of when and where the content was created, who the author is, who published it, and whom it was disseminated to.
As well as understanding the real-life context of the discourse, you can also conduct a literature review on the topic and construct a theoretical framework to guide your analysis.
This step involves closely examining various elements of the material – such as words, sentences, paragraphs, and overall structure – and relating them to attributes, themes, and patterns relevant to your research question.
Once you have assigned particular attributes to elements of the material, reflect on your results to examine the function and meaning of the language used. Here, you will consider your analysis in relation to the broader context that you established earlier to draw conclusions that answer your research question.
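The coding stage described above is usually done with qualitative-analysis software, but the underlying idea can be sketched in a few lines of code. The snippet below is an illustrative toy example, not a substitute for careful qualitative coding: the thematic codes, keyword lists, and interview excerpts are all invented for the sketch. It assigns codes to text segments by simple keyword matching and then tallies how often each code appears across the material.

```python
from collections import Counter

# Hypothetical thematic codes mapped to indicator keywords (invented for illustration).
CODES = {
    "trust": ["reliable", "honest", "depend"],
    "doubt": ["uncertain", "suspicious", "question"],
    "emotion": ["angry", "fear", "joy"],
}

def code_segment(segment):
    """Return the set of codes whose keywords appear in a text segment."""
    text = segment.lower()
    return {code for code, keywords in CODES.items()
            if any(word in text for word in keywords)}

def tally_codes(segments):
    """Count how many segments each code was assigned to."""
    counts = Counter()
    for seg in segments:
        counts.update(code_segment(seg))
    return counts

# Invented excerpts standing in for transcribed material.
segments = [
    "I was uncertain whether the figures could be trusted.",
    "She seemed honest and reliable throughout the interview.",
    "His angry tone made everyone question the claims.",
]
print(tally_codes(segments))
```

In real research the assignment of attributes is interpretive rather than mechanical; a keyword tally like this can at most help you locate candidate segments to read closely before you relate them to your research question.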
If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.
Luo, A. (2023, June 22). Critical Discourse Analysis | Definition, Guide & Examples. Scribbr. Retrieved August 6, 2024, from https://www.scribbr.com/methodology/discourse-analysis/
Chris Drew (PhD)
Dr. Chris Drew is the founder of the Helpful Professor. He holds a PhD in education and has published over 20 articles in scholarly journals. He is the former editor of the Journal of Learning Development in Higher Education.
Critical analysis refers to the ability to examine something in detail in preparation to make an evaluation or judgment.
It will involve exploring underlying assumptions, theories, arguments, evidence, logic, biases, contextual factors, and so forth, that could help shed more light on the topic.
In essay writing, a critical analysis essay will involve using a range of analytical skills to explore a topic, such as:
If you’re writing an essay, you could also watch my guide on how to write a critical analysis essay below, and don’t forget to grab your worksheets and critical analysis essay plan to save yourself a ton of time:
1. Exploring Strengths and Weaknesses
Perhaps the first and most straightforward method of critical analysis is to create a simple strengths-vs-weaknesses comparison.
Most things have both strengths and weaknesses – you could even do this for yourself! What are your strengths? Maybe you’re kind or good at sports or good with children. What are your weaknesses? Maybe you struggle with essay writing or concentration.
If you can analyze your own strengths and weaknesses, then you understand the concept. What might be the strengths and weaknesses of the idea you’re hoping to critically analyze?
Strengths and weaknesses could include:
You may consider using a SWOT analysis for this step. I’ve provided a SWOT analysis guide here.
Evaluation of sources refers to looking at whether a source is reliable or unreliable.
This is a fundamental media literacy skill.
Steps involved in evaluating sources include asking questions like:
For more on this topic, I’d recommend my detailed guide on digital literacy.
Identifying similarities encompasses the act of drawing parallels between elements, concepts, or issues.
In critical analysis, it’s common to compare a given article, idea, or theory to another one. In this way, you can identify areas in which they are alike.
Determining similarities can be a challenge, but it’s an intellectual exercise that fosters a greater understanding of the aspects you’re studying. This step often calls for careful reading and note-taking to highlight matching information, points of view, arguments, or even suggested solutions.
Similarities might be found in:
Remember, the intention of identifying similarities is not to prove one right or wrong. Rather, it sets the foundation for understanding the larger context of your analysis, anchoring your arguments in a broader spectrum of ideas.
Your critical analysis strengthens when you can see the patterns and connections across different works or topics. It fosters a more comprehensive, insightful perspective. And importantly, it is a stepping stone in your analysis journey towards evaluating differences, which is equally imperative and insightful in any analysis.
Identifying differences involves pinpointing the unique aspects, viewpoints or solutions introduced by the text you’re analyzing. How does it stand out as different from other texts?
To do this, you’ll need to compare this text to another text.
Differences can be revealed in:
Identifying differences helps to reveal the multiplicity of perspectives and approaches on a given topic. Doing so provides a more in-depth, nuanced understanding of the field or issue you’re exploring.
This deeper understanding can greatly enhance your overall critique of the text you’re looking at. As such, learning to identify both similarities and differences is an essential skill for effective critical analysis.
My favorite tool for identifying similarities and differences is a Venn Diagram:
To use a Venn diagram, title each circle with the name of one of the two texts. Then place similarities in the overlapping area of the circles, and place the unique characteristics (differences) of each text in the non-overlapping parts.
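If you like working programmatically, the three regions of a Venn diagram map directly onto set operations. The sketch below is purely illustrative: the keyword lists for the two hypothetical texts are invented for the example.

```python
# Hypothetical keyword lists extracted from two texts being compared.
text_a_terms = {"motivation", "behaviour", "reward", "habit", "goal"}
text_b_terms = {"motivation", "identity", "goal", "culture", "habit"}

# Set operations mirror the three regions of a Venn diagram.
shared = text_a_terms & text_b_terms   # overlapping area: similarities
only_a = text_a_terms - text_b_terms   # left circle only: unique to text A
only_b = text_b_terms - text_a_terms   # right circle only: unique to text B

print("Similarities:", sorted(shared))   # ['goal', 'habit', 'motivation']
print("Unique to A:", sorted(only_a))    # ['behaviour', 'reward']
print("Unique to B:", sorted(only_b))    # ['culture', 'identity']
```

The same intersection-and-difference logic works whether your "terms" are keywords, arguments, or cited sources.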
Identifying oversights entails pointing out what the author missed, overlooked, or neglected in their work.
Almost every written work, no matter the expertise or meticulousness of the author, contains oversights. These omissions can be absent-minded mistakes or gaps in the argument, stemming from a lack of knowledge, foresight, or attentiveness.
Such gaps can be found in:
By shining a light on these weaknesses, you increase the depth and breadth of your critical analysis. It helps you to estimate the full worth of the text, understand its limitations, and contextualize it within the broader landscape of related work. Ultimately, noticing these oversights helps to make your analysis more balanced and considerate of the full complexity of the topic at hand.
You may notice here that identifying oversights requires you to already have a broad understanding and knowledge of the topic in the first place – so, study up!
Fact-checking refers to the process of meticulously verifying the truth and accuracy of the data, statements, or claims put forward in a text.
Fact-checking serves as the bulwark against misinformation, bias, and unsubstantiated claims. It demands thorough research, resourcefulness, and a keen eye for detail.
Fact-checking goes beyond surface-level assertions:
If you identify factual errors, it’s vital to highlight them when critically analyzing the text. But remember, you could also (after careful scrutiny) conclude that the text appears to be factually correct – that, too, is critical analysis.
Exploring counterexamples involves searching and presenting instances or cases which contradict the arguments or conclusions presented in a text.
Counterexamples are an effective way to challenge the generalizations, assumptions or conclusions made in an article or theory. They can reveal weaknesses or oversights in the logic or validity of the author’s perspective.
Considerations in counterexample analysis are:
Exploring counterexamples enriches your critical analysis by injecting an extra layer of scrutiny, and even doubt, in the text.
By presenting counterexamples, you not only test the resilience and validity of the text but also open up new avenues of discussion and investigation that can further your understanding of the topic.
See Also: Counterargument Examples
Assessing methodologies entails examining the techniques, tools, or procedures employed by the author to collect, analyze and present their information.
The accuracy and validity of a text’s conclusions often depend on the credibility and appropriateness of the methodologies used.
Aspects to inspect include:
One strategy you could implement here is to consider a range of other methodologies the author could have used. If the author conducted interviews, consider questioning why they didn’t use broad surveys that could have presented more quantitative findings. If they only interviewed people with one perspective, consider questioning why they didn’t interview a wider variety of people, etc.
See Also: A List of Research Methodologies
Exploring alternative explanations refers to the practice of proposing differing or opposing ideas to those put forward in the text.
An underlying assumption in any analysis is that there may be multiple valid perspectives on a single topic. The text you’re analyzing might provide one perspective, but your job is to bring into the light other reasonable explanations or interpretations.
Cultivating alternative explanations often involves:
Searching for alternative explanations challenges the authority of a singular narrative or perspective, fostering an environment ripe for intellectual discourse and critical thinking. It nudges you to examine the topic from multiple angles, enhancing your understanding and appreciation of the complexity inherent in the field.
Benjamin Bloom placed analysis as the third-highest form of thinking on his ladder of cognitive skills called Bloom’s Taxonomy.
This taxonomy starts with the lowest levels of thinking – remembering and understanding. The further we go up the ladder, the more we reach higher-order thinking skills that demonstrate depth of understanding and knowledge, as outlined below:
Here’s a full outline of the taxonomy in a table format:
Level (Shallow to Deep) | Description | Examples
---|---|---
Remember | Retain and recall information | Reiterate, memorize, duplicate, repeat, identify
Understand | Grasp the meaning of something | Explain, paraphrase, report, describe, summarize
Apply | Use existing knowledge in new contexts | Practice, calculate, implement, operate, use, illustrate
Analyze | Explore relationships, causes, and connections | Compare, contrast, categorize, organize, distinguish
Evaluate | Make judgments based on sound analysis | Assess, judge, defend, prioritize, recommend
Create | Use existing information to make something new | Invent, develop, design, compose, generate, construct
https://doi.org/10.1136/bmjebm-2018-111132
Qualitative evidence allows researchers to analyse human experience and provides useful exploratory insights into experiential matters and meaning, often explaining the ‘how’ and ‘why’. As we have argued previously, 1 qualitative research has an important place within evidence-based healthcare, contributing to, among other things, policy on patient safety, 2 prescribing, 3 4 and understanding chronic illness. 5 Equally, it offers additional insight into quantitative studies, explaining contextual factors surrounding a successful intervention or why an intervention might have ‘failed’ or ‘succeeded’ where effect sizes cannot. It is for these reasons that the MRC strongly recommends including qualitative evaluations when developing and evaluating complex interventions. 6
Is it necessary?
Although the importance of qualitative research to improve health services and care is now increasingly widely supported (discussed in paper 1), the role of appraising the quality of qualitative health research is still debated. 8 10 Despite a large body of literature focusing on appraisal and rigour, 9 11–15 often referred to as ‘trustworthiness’ 16 in qualitative research, there remains debate about how to—and even whether to—critically appraise qualitative research. 8–10 17–19 However, if we are to make a case for qualitative research as integral to evidence-based healthcare, then any argument to omit a crucial element of evidence-based practice is difficult to justify. That being said, simply applying the standards of rigour used to appraise studies based on the positivist paradigm (positivism depends on quantifiable observations to test hypotheses and assumes that the researcher is independent of the study; research situated within a positivist paradigm is based purely on facts, considers the world to be external and objective, and is concerned with validity, reliability and generalisability as measures of rigour) would be misplaced given the different epistemological underpinnings of the two types of data.
Given its scope and its place within health research, the robust and systematic appraisal of qualitative research to assess its trustworthiness is as paramount to its implementation in clinical practice as any other type of research. It is important to appraise different qualitative studies in relation to the specific methodology used because the methodological approach is linked to the ‘outcome’ of the research (eg, theory development, phenomenological understandings and credibility of findings). Moreover, appraisal needs to go beyond merely describing the specific details of the methods used (eg, how data were collected and analysed), with additional focus needed on the overarching research design and its appropriateness in accordance with the study remit and objectives.
Poorly conducted qualitative research has been described as ‘worthless, becomes fiction and loses its utility’. 20 However, without a deep understanding of concepts of quality in qualitative research or at least an appropriate means to assess its quality, good qualitative research also risks being dismissed, particularly in the context of evidence-based healthcare where end users may not be well versed in this paradigm.
Appraising the quality of qualitative research is not a new concept—there are a number of published appraisal tools, frameworks and checklists in existence. 21–23 An important and often overlooked point is the confusion between tools designed for appraising methodological quality and reporting guidelines designed to assess the quality of methods reporting. An example is the Consolidated Criteria for Reporting Qualitative Research (COREQ) 24 checklist, which was designed to provide standards for authors when reporting qualitative research but is often mistaken for a methods appraisal tool. 10
Broadly speaking, there are two types of critical appraisal approaches for qualitative research: checklists and frameworks. Checklists have often been criticised for confusing quality in qualitative research with ‘technical fixes’, 21 25 resulting in the erroneous prioritisation of particular aspects of methodological processes over others (eg, multiple coding and triangulation). It could be argued that a checklist approach adopts the positivist paradigm, where the focus is on objectively assessing ‘quality’ and the assumption is that the researcher is independent of the research conducted. This may result in the application of quantitative understandings of bias in order to judge aspects of recruitment, sampling, data collection and analysis in qualitative research papers. One of the most widely used appraisal tools is the Critical Appraisal Skills Programme (CASP), 26 which, along with the JBI QARI (Joanna Briggs Institute Qualitative Assessment and Review Instrument), 27 presents an example that tends to mimic the quantitative approach to appraisal. The CASP qualitative tool follows that of other CASP appraisal tools for quantitative research designs developed in the 1990s. The similarities are therefore unsurprising given the status of qualitative research at that time.
Frameworks focus on the overarching concepts of quality in qualitative research, including transparency, reflexivity, dependability and transferability (see box 1 ). 11–13 15 16 20 28 However, unless the reader is familiar with these concepts—their meaning and impact, and how to interpret them—they will have difficulty applying them when critically appraising a paper.
The main issue concerning currently available checklist and framework appraisal methods is that they take a broad-brush approach to ‘qualitative’ research as a whole, with few, if any, sufficiently differentiating between the different methodological approaches (eg, Grounded Theory, Interpretative Phenomenology, Discourse Analysis) or between different methods of data collection (interviewing, focus groups and observations). In this sense, it is akin to taking the entire field of ‘quantitative’ study designs and applying a single method or tool for their quality appraisal. In the case of qualitative research, checklists therefore offer only a blunt and arguably ineffective tool and potentially promote an incomplete understanding of good ‘quality’ in qualitative research. Likewise, current framework methods do not take into account how concepts differ in their application across the variety of qualitative approaches and, like checklists, they also do not differentiate between different qualitative methodologies.
Current approaches to the appraisal of the methodological rigour of the differing types of qualitative research converge towards checklists or frameworks. More importantly, the current tools do not explicitly acknowledge the prejudices that may be present in the different types of qualitative research.
Transferability: the extent to which the presented study allows readers to make connections between the study’s data and wider community settings, ie, transfer conceptual findings to other contexts.
Credibility: extent to which a research account is believable and appropriate, particularly in relation to the stories told by participants and the interpretations made by the researcher.
Reflexivity: refers to the researchers’ engagement of continuous examination and explanation of how they have influenced a research project from choosing a research question to sampling, data collection, analysis and interpretation of data.
Transparency: making explicit the whole research process from sampling strategies, data collection to analysis. The rationale for decisions made is as important as the decisions themselves.
However, we often talk about these concepts in general terms, and it might be helpful to give some explicit examples of how the ‘technical processes’ affect these, for example, partialities related to:
Selection: recruiting participants via gatekeepers, such as healthcare professionals or clinicians, who may select them based on whether they believe them to be ‘good’ participants for interviews/focus groups.
Data collection: a poor interview guide with closed questions that encourage yes/no answers and/or leading questions.
Reflexivity and transparency: where researchers may focus their analysis on preconceived ideas rather than ground their analysis in the data and do not reflect on the impact of this in a transparent way.
The lack of tailored, method-specific appraisal tools has potentially contributed to the poor uptake and use of qualitative research for informing evidence-based decision making. To improve this situation, we propose the need for more robust quality appraisal tools that explicitly encompass both the core design aspects of all qualitative research (sampling/data collection/analysis) and the specific partialities that can arise with different methodological approaches. Such tools might draw on the strengths of current frameworks and checklists while providing users with sufficient understanding of concepts of rigour in relation to the different types of qualitative methods. We provide an outline of such tools in the third and final paper in this series.
As qualitative research becomes ever more embedded in health science research, and in order for that research to have better impact on healthcare decisions, we need to rethink critical appraisal and develop tools that allow differentiated evaluations of the myriad of qualitative methodological approaches rather than continuing to treat qualitative research as a single unified approach.
Contributors VW and DN: conceived the idea for this article. VW: wrote the first draft. AMB and DN: contributed to the final draft. All authors approve the submitted article.
Competing interests None declared.
Provenance and peer review Not commissioned; externally peer reviewed.
Correction notice This article has been updated since its original publication to include a new reference (reference 1.)
Definition:
Discourse Analysis is a method of studying how people use language in different situations to understand what they really mean and what messages they are sending. It helps us understand how language is used to create social relationships and cultural norms.
It examines language use in various forms of communication such as spoken, written, visual or multi-modal texts, and focuses on how language is used to construct social meaning and relationships, and how it reflects and reinforces power dynamics, ideologies, and cultural norms.
Some of the most common types of discourse analysis are:
Conversation analysis: This type of discourse analysis focuses on analyzing the structure of talk and how participants in a conversation make meaning through their interaction. It is often used to study face-to-face interactions, such as interviews or everyday conversations.
Critical discourse analysis: This approach focuses on the ways in which language use reflects and reinforces power relations, social hierarchies, and ideologies. It is often used to analyze media texts or political speeches, with the aim of uncovering the hidden meanings and assumptions that are embedded in these texts.
Discursive psychology: This type of discourse analysis focuses on the ways in which language use is related to psychological processes such as identity construction and attribution of motives. It is often used to study narratives or personal accounts, with the aim of understanding how individuals make sense of their experiences.
Multimodal discourse analysis: This approach focuses on analyzing not only language use, but also other modes of communication, such as images, gestures, and layout. It is often used to study digital or visual media, with the aim of understanding how different modes of communication work together to create meaning.
Corpus-based discourse analysis: This type of discourse analysis uses large collections of texts, or corpora, to analyze patterns of language use across different genres or contexts. It is often used to study language use in specific domains, such as academic writing or legal discourse.
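Corpus-based work of this kind is usually supported by simple computational counts. Real projects use dedicated corpus tools, but the underlying idea of comparing patterns of use across contexts can be sketched minimally in Python; the two mini-corpora below are invented for illustration.

```python
from collections import Counter
import re

# Two invented mini-corpora standing in for real text collections.
academic = "the results suggest that the data indicate that the findings suggest"
casual = "i think that you know i mean i think it is like that"

def word_counts(text):
    """Lowercase the text, tokenize on letter runs, and count occurrences."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

# Comparing the most frequent items hints at genre-specific patterns of use.
print(word_counts(academic).most_common(3))
print(word_counts(casual).most_common(3))
```

Scaling the same counting logic to thousands of texts, and normalizing by corpus size, is essentially what corpus linguists mean by frequency and keyword analysis.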
Descriptive discourse analysis: This type of discourse analysis aims to describe the features and characteristics of language use, without making any value judgments or interpretations. It is often used in linguistic studies to describe grammatical structures or phonetic features of language.
Narrative analysis: This approach focuses on analyzing the structure and content of stories or narratives, with the aim of understanding how they are constructed and how they shape our understanding of the world. It is often used to study personal narratives or cultural myths.
Expository discourse analysis: This type of discourse analysis is used to study texts that explain or describe a concept, process, or idea. It aims to understand how information is organized and presented in such texts and how it influences the reader’s understanding of the topic.
Argumentative discourse analysis: This approach focuses on analyzing texts that present an argument or attempt to persuade the reader or listener. It aims to understand how the argument is constructed, what strategies are used to persuade, and how the audience is likely to respond to the argument.
Here is a step-by-step guide for conducting discourse analysis:
Here are some of the key areas where discourse analysis is commonly used:
Discourse analysis is a valuable research methodology that can be used in a variety of contexts. Here are some situations where discourse analysis may be particularly useful:
Here are some examples of discourse analysis in action:
The purpose of discourse analysis is to examine the ways in which language is used to construct social meaning, relationships, and power relations. By analyzing language use in a systematic and rigorous way, discourse analysis can provide valuable insights into the social and cultural factors that shape communication and interaction.
The specific purposes of discourse analysis may vary depending on the research context, but some common goals include:
Here are some key characteristics of discourse analysis:
Discourse analysis has several advantages as a methodological approach. Here are some of the main advantages:
Some Limitations of Discourse Analysis are as follows:
Implementation Science, volume 19, article number 59 (2024)
The implementation of clinical practice guidelines (CPGs) is a cyclical process in which the evaluation stage can facilitate continuous improvement. Implementation science has utilized theoretical approaches, such as models and frameworks, to understand and address this process. This article aims to provide a comprehensive overview of the models and frameworks used to assess the implementation of CPGs.
A systematic review was conducted following the Cochrane methodology, with adaptations to the "selection process" due to the unique nature of this review. The findings were reported following PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) reporting guidelines. Electronic databases were searched from their inception until May 15, 2023. A predetermined strategy and manual searches were conducted to identify relevant documents from health institutions worldwide. Eligible studies presented models and frameworks for assessing the implementation of CPGs. Information on the characteristics of the documents, the context in which the models were used (specific objectives, level of use, type of health service, target group), and the characteristics of each model or framework (name, domain evaluated, and model limitations) were extracted. The domains of the models were analyzed according to the key constructs: strategies, context, outcomes, fidelity, adaptation, sustainability, process, and intervention. A subgroup analysis was performed grouping models and frameworks according to their levels of use (clinical, organizational, and policy) and type of health service (community, ambulatory, hospital, institutional). The JBI’s critical appraisal tools were utilized by two independent researchers to assess the trustworthiness, relevance, and results of the included studies.
Database searches yielded 14,395 studies, of which 80 full texts were reviewed. Eight studies were included in the data analysis and four methodological guidelines were additionally included from the manual search. The risk of bias in the studies was considered non-critical for the results of this systematic review. A total of ten models/frameworks for assessing the implementation of CPGs were found. The level of use was mainly policy, the most common type of health service was institutional, and the major target group was professionals directly involved in clinical practice. The evaluated domains differed between the models and there were also differences in their conceptualization. All the models addressed the domain "Context", especially at the micro level (8/12), followed by the multilevel (7/12). The domains "Outcome" (9/12), "Intervention" (8/12), "Strategies" (7/12), and "Process" (5/12) were frequently addressed, while "Sustainability" was found only in one study, and "Fidelity/Adaptation" was not observed.
The use of models and frameworks for assessing the implementation of CPGs is still incipient. This systematic review may help stakeholders choose or adapt the most appropriate model or framework to assess CPGs implementation based on their specific health context.
PROSPERO (International Prospective Register of Systematic Reviews) registration number: CRD42022335884. Registered on June 7, 2022.
Although the number of theoretical approaches has grown in recent years, there are still important gaps to be explored in the use of models and frameworks to assess the implementation of clinical practice guidelines (CPGs). This systematic review aims to contribute knowledge to overcome these gaps.
Despite the great advances in implementation science, evaluating the implementation of CPGs remains a challenge, and models and frameworks could support improvements in this field.
This study demonstrates that the available models and frameworks do not cover all characteristics and domains necessary for a complete evaluation of CPGs implementation.
The presented findings contribute to the field of implementation science, encouraging debate on choices and adaptations of models and frameworks for implementation research and evaluation.
Substantial investments have been made in clinical research and development in recent decades, increasing the medical knowledge base and the availability of health technologies [ 1 ]. The use of clinical practice guidelines (CPGs) has increased worldwide to guide best health practices and to maximize healthcare investments. A CPG can be defined as "any formal statements systematically developed to assist practitioner and patient decisions about appropriate health care for specific clinical circumstances" [ 2 ] and has the potential to improve patient care by promoting interventions of proven benefit and discouraging ineffective interventions. Furthermore, CPGs can promote efficiency in resource allocation and provide support for managers and health professionals in decision-making [ 3 , 4 ].
However, having a high-quality CPG does not guarantee that the expected health benefits will be obtained. In fact, putting guidelines into use still presents a challenge for most health services across distinct levels of government. In addition to being developed with high methodological rigor, recommendations need to be made available to their users (the diffusion and dissemination stages) and then used in clinical practice (implemented), which usually requires behavioral changes and appropriate resources and infrastructure. All these stages form an iterative and complex process called implementation, which is defined as the process of putting new practices within a setting into use [ 5 , 6 ].
Implementation is a cyclical process, and evaluation is one of its key stages, allowing continuous improvement of CPG development and implementation strategies. It consists of verifying whether clinical practice is being performed as recommended (process or formative evaluation) and whether the expected results and impact are being reached (summative evaluation) [ 7 , 8 , 9 ]. Although the importance of the evaluation stage has been recognized, research on how these guidelines are implemented is scarce [ 10 ]. This paper focuses on the process of assessing CPG implementation.
To understand and improve this complex process, implementation science provides a systematic set of principles and methods to integrate research findings and other evidence-based practices into routine practice and improve the quality and effectiveness of health services and care [ 11 ]. The field of implementation science uses theoretical approaches that have varying degrees of specificity based on the current state of knowledge and are structured based on theories, models, and frameworks [ 5 , 12 , 13 ]. A "model" is defined as "a simplified depiction of a more complex world with relatively precise assumptions about cause and effect", and a "framework" is defined as "a broad set of constructs that organize concepts and data descriptively without specifying causal relationships" [ 9 ]. Although these concepts are distinct, in this paper the terms are used interchangeably, as both typically function like checklists of factors relevant to various aspects of implementation.
There are a variety of theoretical approaches available in implementation science [ 5 , 14 ], which can make choosing the most appropriate one challenging [ 5 ]. Some models and frameworks have been categorized as "evaluation models" because they provide a structure for evaluating implementation endeavors [ 15 ], even though theoretical approaches from other categories can also be applied for evaluation purposes because they specify concepts and constructs that may be operationalized and measured [ 13 ]. Two frameworks that can specify implementation aspects to be evaluated as part of intervention studies are RE-AIM (Reach, Effectiveness, Adoption, Implementation, Maintenance) [ 16 ] and PRECEDE-PROCEED (Predisposing, Reinforcing and Enabling Constructs in Educational Diagnosis and Evaluation-Policy, Regulatory, and Organizational Constructs in Educational and Environmental Development) [ 17 ]. Although the number of theoretical approaches has grown in recent years, the use of models and frameworks to evaluate the implementation of guidelines still seems to be a challenge.
This article aims to provide a complete map of the models and frameworks applied to assess the implementation of CPGs. It also aims to inform debate and choices on models and frameworks for researching and evaluating CPG implementation processes, thus facilitating the continued development of the field of implementation science and contributing to healthcare policy and practice.
A systematic review was conducted following the Cochrane methodology [ 18 ], with adaptations to the "selection process" due to the unique nature of this review (details can be found in the respective section). The review protocol was registered in PROSPERO (registration number: CRD42022335884) on June 7, 2022. This report adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [ 19 ] and a completed checklist is provided in Additional File 1.
The SDMO approach (Types of Studies, Types of Data, Types of Methods, Outcomes) [ 20 ] was utilized in this systematic review, outlined as follows:
All types of studies were considered for inclusion, as the assessment of CPG implementation can benefit from a diverse range of study designs, including randomized clinical trials/experimental studies, scale/tool development, systematic reviews, opinion pieces, qualitative studies, peer-reviewed articles, books, reports, and unpublished theses.
Studies were categorized based on their methodological designs, which guided the synthesis, risk of bias assessment, and presentation of results.
Study protocols and conference abstracts were excluded due to insufficient information for this review.
Studies that evaluated the implementation of CPGs either independently or as part of a multifaceted intervention.
Guidelines for evaluating CPG implementation.
Inclusion of CPGs related to any context, clinical area, intervention, and patient characteristics.
No restrictions were placed on publication date or language.
Exclusion criteria
General guidelines were excluded, as this review focused on 'models for evaluating clinical practice guidelines implementation' rather than the guidelines themselves.
Studies that focused solely on implementation determinants as barriers and enablers were excluded, as this review aimed to explore comprehensive models/frameworks.
Studies evaluating programs and policies were excluded.
Studies that only assessed implementation strategies (isolated actions) rather than the implementation process itself were excluded.
Studies that focused solely on the impact or results of implementation (summative evaluation) were excluded.
Types of methods: not applicable.
All potential models or frameworks for assessing the implementation of CPGs (evaluation models/frameworks), as well as their characteristics: name; specific objectives; levels of use (clinical, organizational, and policy); health system (public, private, or both); type of health service (community, ambulatory, hospital, institutional, homecare); domains or outcomes evaluated; type of recommendation evaluated; context; limitations of the model.
Model was defined as "a deliberated simplification of a phenomenon on a specific aspect" [ 21 ].
Framework was defined as "structure, overview outline, system, or plan consisting of various descriptive categories" [ 21 ].
Models or frameworks used solely for the CPG development, dissemination, or implementation phase.
Models/frameworks used solely for assessment processes other than implementation, such as for the development or dissemination phase.
The systematic search was conducted on July 31, 2022 (and updated on May 15, 2023) in the following electronic databases: MEDLINE/PubMed, Centre for Reviews and Dissemination (CRD), the Cochrane Library, Cumulative Index to Nursing and Allied Health Literature (CINAHL), EMBASE, Epistemonikos, Global Health, Health Systems Evidence, PDQ-Evidence, PsycINFO, Rx for Change (Canadian Agency for Drugs and Technologies in Health, CADTH), Scopus, Web of Science and Virtual Health Library (VHL). The Google Scholar database was used for the manual selection of studies (first 10 pages).
Additionally, hand searches were performed on the lists of references included in the systematic reviews and citations of the included studies, as well as on the websites of institutions working on CPGs development and implementation: Guidelines International Networks (GIN), National Institute for Health and Care Excellence (NICE; United Kingdom), World Health Organization (WHO), Centers for Disease Control and Prevention (CDC; USA), Institute of Medicine (IOM; USA), Australian Department of Health and Aged Care (ADH), Scottish Intercollegiate Guidelines Network (SIGN), National Health and Medical Research Council (NHMRC; Australia), Queensland Health, The Joanna Briggs Institute (JBI), Ministry of Health and Social Policy of Spain, Ministry of Health of Brazil and Capes Theses and Dissertations Catalog.
The search strategy combined terms related to "clinical practice guidelines" (practice guidelines, practice guidelines as topic, clinical protocols), "implementation", "assessment" (assessment, evaluation), and "models, framework". The free term "monitoring" was not used because it was regularly related to clinical monitoring and not to implementation monitoring. The search strategies adapted for the electronic databases are presented in an additional file (see Additional file 2).
The results of the literature search from scientific databases, excluding the CRD database, were imported into Mendeley Reference Management software to remove duplicates. They were then transferred to the Rayyan platform ( https://rayyan.qcri.org ) [ 22 ] for the screening process. Initially, studies related to the "assessment of implementation of the CPG" were selected. The titles were first screened independently by two pairs of reviewers (first selection: four reviewers, NM, JB, SS, and JG; update: a pair of reviewers, NM and DG). The title screening was broad, including all potentially relevant studies on CPG and the implementation process. Following that, the abstracts were independently screened by the same group of reviewers. The abstract screening was more focused, specifically selecting studies that addressed CPG and the evaluation of the implementation process. In the next step, full-text articles were reviewed independently by a pair of reviewers (NM, DG) to identify those that explicitly presented "models" or "frameworks" for assessing the implementation of the CPG. Disagreements regarding the eligibility of studies were resolved through discussion and consensus, and by a third reviewer (JB) when necessary. One reviewer (NM) conducted manual searches, and the inclusion of documents was discussed with the other reviewers.
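The deduplication step that precedes screening can be sketched in a few lines. The record fields and titles below are hypothetical examples for illustration, not data from this review:

```python
# Illustrative sketch of duplicate removal before title screening.
# Record fields and titles are hypothetical, not taken from this review.

def normalize(title: str) -> str:
    """Normalize a title for duplicate detection (case and whitespace only)."""
    return " ".join(title.lower().split())

def deduplicate(records):
    """Keep the first record seen for each normalized title."""
    seen, unique = set(), []
    for rec in records:
        key = normalize(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"title": "Assessing CPG implementation", "source": "MEDLINE"},
    {"title": "assessing  CPG  implementation", "source": "EMBASE"},  # duplicate
    {"title": "A framework for guideline evaluation", "source": "Scopus"},
]
print(len(deduplicate(records)))  # 2
```

In practice, reference managers match on richer keys (DOI, authors, year) than title alone; this sketch only conveys the idea of collapsing near-identical records before two reviewers screen them.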
The selected studies were independently classified and evaluated according to their methodological designs by two investigators (NM and JG). This review employed JBI’s critical appraisal tools to assess the trustworthiness, relevance and results of the included studies [ 23 ] and these tools are presented in additional files (see Additional file 3 and Additional file 4). Disagreements were resolved by consensus or consultation with the other reviewers. Methodological guidelines and noncomparative and before–after studies were not evaluated because JBI does not have specific tools for assessing these types of documents. Although the studies were assessed for quality, they were not excluded on this basis.
The data was independently extracted by two reviewers (NM, DG) using a Microsoft Excel spreadsheet. Discrepancies were discussed and resolved by consensus. The following information was extracted:
Document characteristics : author; year of publication; title; study design; instrument of evaluation; country; guideline context;
Usage context of the models : specific objectives; level of use (clinical, organizational, and policy); type of health service (community, ambulatory, hospital, institutional); target group (guideline developers; clinicians; health professionals; health-policy decision-makers; health-care organizations; service managers);
Model and framework characteristics : name, domain evaluated, and model limitations.
The set of information to be extracted, shown in the systematic review protocol, was adjusted to improve the organization of the analysis.
The "level of use" refers to the scope of the model used. "Clinical" was considered when the evaluation focused on individual practices, "organizational" when practices were within a health service institution, and "policy" when the evaluation was more systemic and covered different health services or institutions.
The "type of health service" indicated the category of health service where the model/framework was used (or can be used) to assess the implementation of the CPG, related to the complexity of healthcare. "Community" relates to primary health care; "ambulatory" to secondary health care; "hospital" to tertiary health care; and "institutional" represents models/frameworks not specific to a particular type of health service.
The "target group" included stakeholders related to the use of the model/framework for evaluating the implementation of the CPG, such as clinicians, health professionals, guideline developers, health policy-makers, health organizations, and service managers.
The category "health system" (public, private, or both) mentioned in the systematic review protocol was not found in the literature obtained and was removed as an extraction variable. Similarly, the variables "type of recommendation evaluated" and "context" were grouped because the same information was included in the "guideline context" section of the study.
Some selected documents presented models or frameworks recognized by the scientific field, including some that were validated. However, some studies adapted the model to their context. Therefore, the domain analysis covered all model or framework domains evaluated by (or suggested for evaluation by) each document analyzed.
The results were tabulated using narrative synthesis with an aggregative approach, without meta-analysis, aiming to summarize the documents descriptively for the organization, description, interpretation and explanation of the study findings [ 24 , 25 ].
The model/framework domains evaluated in each document were studied according to Nilsen et al.’s constructs: "strategies", "context", "outcomes", "fidelity", "adaptation" and "sustainability". For this study, "strategies" were described as structured and planned initiatives used to enhance the implementation of clinical practice [ 26 ].
The definition of "context" varies in the literature. Despite that, this review considered it as the set of circumstances or factors surrounding a particular implementation effort, such as organizational support, financial resources, social relations and support, leadership, and organizational culture [ 26 , 27 ]. The domain "context" was subdivided according to the level of health care into "micro" (individual perspective), "meso" (organizational perspective), "macro" (systemic perspective), and "multiple" (when there is an issue involving more than one level of health care).
The "outcomes" domain was related to the results of the implementation process (unlike clinical outcomes) and was stratified according to the following constructs: acceptability, appropriateness, feasibility, adoption, cost, and penetration. All these concepts align with the definitions of Proctor et al. (2011), although we decided to separate "fidelity" and "sustainability" as independent domains similar to Nilsen [ 26 , 28 ].
"Fidelity" and "adaptation" were considered the same domain, as they are complementary pieces of the same issue. In this study, implementation fidelity refers to how closely guidelines are followed as intended by their developers or designers. On the other hand, adaptation involves making changes to the content or delivery of a guideline to better fit the needs of a specific context. The "sustainability" domain was defined as evaluations about the continuation or permanence over time of the CPG implementation.
Additionally, the domain "process" was utilized to address issues related to the implementation process itself, rather than focusing solely on the outcomes of the implementation process, as done by Wang et al. [ 14 ]. Furthermore, the "intervention" domain was introduced to distinguish aspects related to the CPG characteristics that can impact its implementation, such as the complexity of the recommendation.
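The coding scheme described above can be represented as a simple lookup table and tallied per document. The structure and sample codings below are an illustrative sketch, not the review's actual extraction data:

```python
from collections import Counter

# The analytical domains and subdomains described above, encoded as a
# lookup table (an illustrative sketch, not a validated instrument).
DOMAINS = {
    "context": ["micro", "meso", "macro", "multiple"],
    "outcomes": ["acceptability", "appropriateness", "feasibility",
                 "adoption", "cost", "penetration"],
    "strategies": [],
    "fidelity/adaptation": [],
    "sustainability": [],
    "process": [],
    "intervention": [],
}

# Hypothetical per-document codings; tallying them yields "n/12"-style
# frequencies like those reported in the results.
codings = [
    {"context", "outcomes", "strategies"},
    {"context", "outcomes", "process"},
    {"context", "intervention"},
]
freq = Counter(domain for doc in codings for domain in doc)
print(freq["context"], freq["outcomes"])  # 3 2
```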
A subgroup analysis was performed with models and frameworks categorized based on their levels of use (clinical, organizational, and policy) and the type of health service (community, ambulatory, hospital, institutional) associated with the CPG. The goal was to assist stakeholders (politicians, clinicians, researchers, or others) in selecting the most suitable model for evaluating CPG implementation based on their specific health context.
Database searches yielded 26,011 studies, of which 107 full texts were reviewed. During the full-text review, 99 articles were excluded: 41 studies did not mention a model or framework for assessing the implementation of the CPG, 31 studies evaluated only implementation strategies (isolated actions) rather than the implementation process itself, and 27 articles were not related to the implementation assessment. Therefore, eight studies were included in the data analysis. The updated search did not reveal additional relevant studies. The main reason for study exclusion was that they did not use models or frameworks to assess CPG implementation. Additionally, four methodological guidelines were included from the manual search (Fig. 1 ).
PRISMA diagram. Acronyms: ADH—Australian Department of Health, CINAHL—Cumulative Index to Nursing and Allied Health Literature, CDC—Centers for Disease Control and Prevention, CRD—Centre for Reviews and Dissemination, GIN—Guidelines International Networks, HSE—Health Systems Evidence, IOM—Institute of Medicine, JBI—The Joanna Briggs Institute, MHB—Ministry of Health of Brazil, NICE—National Institute for Health and Care Excellence, NHMRC—National Health and Medical Research Council, MSPS – Ministerio de Sanidad Y Política Social (Spain), SIGN—Scottish Intercollegiate Guidelines Network, VHL – Virtual Health Library, WHO—World Health Organization. Legend: Reason A –The study evaluated only implementation strategies (isolated actions) rather than the implementation process itself. Reason B – The study did not mention a model or framework for assessing the implementation of the intervention. Reason C – The study was not related to the implementation assessment. Adapted from Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ 2021;372:n71. https://doi.org/10.1136/bmj.n71 .
According to the JBI’s critical appraisal tools, the overall assessment of the studies indicates their acceptance for the systematic review.
The cross-sectional studies lacked clear information regarding "confounding factors" or "strategies to address confounding factors". This was understandable given the nature of the study, where such details are not typically included. However, the reviewers did not find this lack of information to be critical, allowing the studies to be included in the review. The results of this methodological quality assessment can be found in an additional file (see Additional file 5).
In the qualitative studies, there was some ambiguity regarding the questions: "Is there a statement locating the researcher culturally or theoretically?" and "Is the influence of the researcher on the research, and vice versa, addressed?". However, the reviewers decided to include the studies and deemed the methodological quality sufficient for the analysis in this article, based on the other information analyzed. The results of this methodological quality assessment can be found in an additional file (see Additional file 6).
The documents originated from several continents: Australia/Oceania (4/12) [ 31 , 33 , 36 , 37 ], North America (4/12) [ 30 , 32 , 38 , 39 ], Europe (2/12) [ 29 , 35 ] and Asia (2/12) [ 34 , 40 ]. The types of documents were classified as cross-sectional studies (4/12) [ 29 , 32 , 34 , 38 ], methodological guidelines (4/12) [ 33 , 35 , 36 , 37 ], mixed methods studies (3/12) [ 30 , 31 , 39 ] or noncomparative studies (1/12) [ 40 ]. In terms of the instrument of evaluation, most of the documents used a survey/questionnaire (6/12) [ 29 , 30 , 31 , 32 , 34 , 38 ], three (3/12) used qualitative instruments (interviews, group discussions) [ 30 , 31 , 39 ], one used a checklist [ 37 ], one used an audit [ 33 ] and three (3/12) did not define a specific measurement instrument [ 35 , 36 , 40 ].
Considering the clinical areas covered, most studies evaluated the implementation of nonspecific (general) clinical areas [ 29 , 33 , 35 , 36 , 37 , 40 ]. However, some studies focused on specific clinical contexts, such as mental health [ 32 , 38 ], oncology [ 39 ], fall prevention [ 31 ], spinal cord injury [ 30 ], and sexually transmitted infections [ 34 ].
Specific objectives.
All the studies highlighted the purpose of guiding the process of evaluating the implementation of CPGs, even if they evaluated CPGs from generic or different clinical areas.
The most common level of use of the models/frameworks identified to assess the implementation of CPGs was policy (6/12) [ 33 , 35 , 36 , 37 , 39 , 40 ]. At this level, the model is used in a systematic way to evaluate all the processes involved in CPG implementation and is primarily related to methodological guidelines. This was followed by the organizational level of use (5/12) [ 30 , 31 , 32 , 38 , 39 ], where the model is used to evaluate the implementation of CPGs in a specific institution, considering its particular environment. Finally, the clinical level of use (2/12) [ 29 , 34 ] focuses on individual practice and the factors that can influence the implementation of CPGs by professionals.
Institutional services were predominant (5/12) [ 33 , 35 , 36 , 37 , 40 ] and included methodological guidelines and a study of model development and validation. Hospitals were the second most common type of health service (4/12) [ 29 , 30 , 31 , 34 ], followed by ambulatory (2/12) [ 32 , 34 ] and community health services (1/12) [ 32 ]. Two studies did not specify which type of health service the assessment addressed [ 38 , 39 ].
The focus of the target group was professionals directly involved in clinical practice (6/12) [ 29 , 31 , 32 , 34 , 38 , 40 ], namely, health professionals and clinicians. Other less related stakeholders included guideline developers (2/12) [ 39 , 40 ], health policy decision makers (1/12) [ 39 ], and healthcare organizations (1/12) [ 39 ]. The target group was not defined in the methodological guidelines, although all the mentioned stakeholders could be related to these documents.
Models and frameworks for assessing the implementation of CPGs.
The Consolidated Framework for Implementation Research (CFIR) [ 31 , 38 ] and the Promoting Action on Research Implementation in Health Systems (PARiHS) framework [ 29 , 30 ] were the most commonly employed frameworks within the selected documents. The other models mentioned were: Goal commitment and implementation of practice guidelines framework [ 32 ]; Guideline to identify key indicators [ 35 ]; Guideline implementation checklist [ 37 ]; Guideline implementation evaluation tool [ 40 ]; JBI Implementation Framework [ 33 ]; Reach, effectiveness, adoption, implementation and maintenance (RE-AIM) framework [ 34 ]; The Guideline Implementability Framework [ 39 ] and an unnamed model [ 36 ].
The number of domains evaluated (or suggested for evaluation) by the documents varied between three and five, with the majority focusing on three domains. All the models addressed the domain "context", with a particular emphasis on the micro level of the health care context (8/12) [ 29 , 31 , 34 , 35 , 36 , 37 , 38 , 39 ], followed by the multilevel (7/12) [ 29 , 31 , 32 , 33 , 38 , 39 , 40 ], meso level (4/12) [ 30 , 35 , 39 , 40 ] and macro level (2/12) [ 37 , 39 ]. The "Outcome" domain was evaluated in nine models. Within this domain, the most frequently evaluated subdomain was "adoption" (6/12) [ 29 , 32 , 34 , 35 , 36 , 37 ], followed by "acceptability" (4/12) [ 30 , 32 , 35 , 39 ], "appropriateness" (3/12) [ 32 , 34 , 36 ], "feasibility" (3/12) [ 29 , 32 , 36 ], "cost" (1/12) [ 35 ] and "penetration" (1/12) [ 34 ]. Regarding the other domains, "Intervention" (8/12) [ 29 , 31 , 34 , 35 , 36 , 38 , 39 , 40 ], "Strategies" (7/12) [ 29 , 30 , 33 , 35 , 36 , 37 , 40 ] and "Process" (5/12) [ 29 , 31 , 32 , 33 , 38 ] were frequently addressed in the models, while "Sustainability" (1/12) [ 34 ] was only found in one model, and "Fidelity/Adaptation" was not observed. The domains presented by the models and frameworks and evaluated in the documents are shown in Table 2 .
Only two documents mentioned limitations in the use of the model or framework. Both reported limitations in the use of CFIR: it "is complex and cumbersome and requires tailoring of the key variables to the specific context", and "this framework should be supplemented with other important factors and local features to achieve a sound basis for the planning and realization of an ongoing project" [ 31 , 38 ]. No limitations were reported for the other models or frameworks.
Following the subgroup analysis (Table 3 ), five different models/frameworks were utilized at the policy level by institutional health services. These included the Guideline Implementation Evaluation Tool [ 40 ], the NHMRC tool (model name not defined) [ 36 ], the JBI Implementation Framework + GRiP [ 33 ], Guideline to identify key indicators [ 35 ], and the Guideline implementation checklist [ 37 ]. Additionally, the "Guideline Implementability Framework" [ 39 ] was implemented at the policy level without restrictions based on the type of health service. Regarding the organizational level, the models used varied depending on the type of service. The "Goal commitment and implementation of practice guidelines framework" [ 32 ] was applied in community and ambulatory health services, while "PARiHS" [ 29 , 30 ] and "CFIR" [ 31 , 38 ] were utilized in hospitals. In contexts where the type of health service was not defined, "CFIR" [ 31 , 38 ] and "The Guideline Implementability Framework" [ 39 ] were employed. Lastly, at the clinical level, "RE-AIM" [ 34 ] was utilized in ambulatory and hospital services, and PARiHS [ 29 , 30 ] was specifically used in hospital services.
This systematic review identified 10 models/frameworks used to assess the implementation of CPGs in various health system contexts. These documents shared similar objectives in utilizing models and frameworks for assessment. The primary level of use was policy, the most common type of health service was institutional, and the main target group of the documents was professionals directly involved in clinical practice. The models and frameworks presented varied analytical domains, with sometimes divergent concepts used in these domains. This study is innovative in its emphasis on the evaluation stage of CPG implementation and in summarizing aspects and domains aimed at the practical application of these models.
The small number of documents contrasts with studies that present an extensive range of models and frameworks available in implementation science. The findings suggest that the use of models and frameworks to evaluate the implementation of CPGs is still in its early stages. Among the selected documents, there was a predominance of cross-sectional studies and methodological guidelines, which strongly influenced how the implementation evaluation was conducted. This was primarily done through surveys/questionnaires, qualitative methods (interviews, group discussions), and non-specific measurement instruments. Regarding the subject areas evaluated, most studies focused on a general clinical area, while others explored different clinical areas. This suggests that the evaluation of CPG implementation has been carried out in various contexts.
The models were chosen independently of the categories proposed in the literature, with their usage categorized for purposes other than implementation evaluation, as is the case with CFIR and PARiHS. This practice was described by Nilsen et al., who suggested that models and frameworks from other categories can also be applied for evaluation purposes because they specify concepts and constructs that may be operationalized and measured [ 14 , 15 , 42 , 43 ].
The results highlight the increased use of models and frameworks in evaluation processes at the policy level and institutional environments, followed by the organizational level in hospital settings. This finding contradicts a review that reported the policy level as an area that was not as well studied [ 44 ]. The use of different models at the institutional level is also emphasized in the subgroup analysis. This may suggest that the greater the impact (social, financial/economic, and organizational) of implementing CPGs, the greater the interest and need to establish well-defined and robust processes. In this context, the evaluation stage stands out as crucial, and the investment of resources and efforts to structure this stage becomes even more advantageous [ 10 , 45 ]. Two studies (16.7%) evaluated the implementation of CPGs at the individual level (clinical level). These studies stand out for their potential to analyze variations in clinical practice in greater depth.
In contrast to the level of use and type of health service most strongly indicated in the documents, with systemic approaches, the target group most observed was professionals directly involved in clinical practice. This suggests an emphasis on evaluating individual behaviors. This same emphasis is observed in the analysis of the models, in which there is a predominance of evaluating the micro level of the health context and the "adoption" subdomain, in contrast with the underuse of domains such as "cost" and "process". Cassetti et al. observed the same phenomenon in their review, in which studies evaluating the implementation of CPGs mainly adopted a behavioral change approach to tackle those issues, without considering the influence of wider social determinants of health [ 10 ]. However, the literature widely reiterates that multiple factors impact the implementation of CPGs, and different actions are required to make them effective [ 6 , 46 , 47 ]. As a result, there is enormous potential for the development and adaptation of models and frameworks aimed at more systemic evaluation processes that consider institutional and organizational aspects.
In analyzing the model domains, most models focused on evaluating only some aspects of implementation (three domains). All models evaluated the "context", highlighting its significant influence on implementation [ 9 , 26 ]. Context is an essential effect modifier for providing research evidence to guide decisions on implementation strategies [ 48 ]. Contextualizing a guideline involves integrating research or other evidence into a specific circumstance [ 49 ]. The analysis of this domain was adjusted to include all possible contextual aspects, even if they were initially allocated to other domains. Some contextual aspects presented by the models vary in comprehensiveness, such as the assessment of the "timing and nature of stakeholder engagement" [ 39 ], which includes individual engagement by healthcare professionals and organizational involvement in CPG implementation. While the importance of context is universally recognized, its conceptualization and interpretation differ across studies and models. This divergence is also evident in other domains, consistent with existing literature [ 14 ]. Efforts to address this conceptual divergence in implementation science are ongoing, but further research and development are needed in this field [ 26 ].
The main subdomain evaluated was "adoption" within the outcome domain. This may be attributed to the ease of accessing information on the adoption of the CPG, whether through computerized system records, patient records, or self-reports from healthcare professionals or patients themselves. The "acceptability" subdomain pertains to the perception among implementation stakeholders that a particular CPG is agreeable, palatable or satisfactory. On the other hand, "appropriateness" encompasses the perceived fit, relevance or compatibility of the CPG for a specific practice setting, provider, or consumer, or its perceived fit to address a particular issue or problem [ 26 ]. Both subdomains are subjective and rely on stakeholders' interpretations and perceptions of the issue being analyzed, making them susceptible to reporting biases. Moreover, obtaining this information requires direct consultation with stakeholders, which can be challenging for some evaluation processes, particularly in institutional contexts.
The evaluation of the subdomains "feasibility" (the extent to which a CPG can be successfully used or carried out within a given agency or setting), "cost" (the cost impact of an implementation effort), and "penetration" (the extent to which an intervention or treatment is integrated within a service setting and its subsystems) [ 26 ] was rarely observed in the documents. This may be related to the greater complexity of obtaining information on these aspects, which involve cross-cutting and multifactorial issues; it would be difficult to gather this information in evaluations that target health practitioners alone. This highlights the need for CPG implementation evaluation processes involving multiple stakeholders, even if the evaluation is adjusted for each of these groups.
Although the models do not establish an "intervention" domain, we considered it pertinent in this study to delimit the issues intrinsic to CPGs, such as methodological quality or clarity in establishing recommendations. These issues were quite common in the models evaluated but were considered within other domains (e.g., "context"). Studies have reported the importance of evaluating these issues intrinsic to CPGs [ 47 , 50 ] and their influence on the implementation process [ 51 ].
The models explicitly present the "strategies" domain, and its evaluation was usually included in the assessments. This is likely due to the expansion of scientific and practical work in implementation science involving theoretical approaches to developing and applying interventions that improve the implementation of evidence-based practices. However, these interventions are not guaranteed to be effective: a previous review reported unclear results as to whether the strategies had affected successful implementation [ 52 ]. Furthermore, the model domains do not cover all the complexity surrounding the strategies and their development and implementation. For example, the ‘Guideline implementation evaluation tool’ evaluates whether guideline developers have designed and provided auxiliary tools to promote the implementation of guidelines [ 40 ], but this does not mean that these tools work as expected.
The "process" domain was identified in the CFIR [ 31 , 38 ], JBI/GRiP [ 33 ], and PARiHS [ 29 ] frameworks. While it may be included in other domains of analysis, its distinct separation is crucial for defining operational issues when assessing the implementation process, such as determining if and how the use of the mentioned CPG was evaluated [ 3 ]. Despite its presence in multiple models, there is still limited detail in the evaluation guidelines, which makes it difficult to operationalize the concept. Further research is needed to better define the "process" domain and its connections and boundaries with other domains.
The "sustainability" domain was observed only in the RE-AIM framework, which is categorized as an evaluation framework [ 34 ]. In its acronym, the letter M stands for "maintenance" and corresponds to assessing whether use is maintained, typically for longer than 6 months. The presence of this domain highlights the need for continuous evaluation of CPG implementation in the short, medium, and long term. Although the RE-AIM framework includes this domain, it was not used in the questionnaire developed in the study. One probable reason is that the evaluation of CPG implementation is still conducted on a one-off basis rather than as a continuous improvement process. Considering that changes in clinical practice are inherent over time, evaluating and monitoring changes throughout the lifetime of the CPG could be an important strategy for ensuring its implementation. This is an emerging field that requires additional investment and research.
The "fidelity/adaptation" domain was not observed in the models. These emerging concepts concern the extent to which a CPG is conducted exactly as planned or instead undergoes adjustments and adaptations. Fidelity or adaptation in the implementation of a CPG does not in itself imply greater or lesser effectiveness; after all, some adaptations may be necessary to implement general CPGs in specific contexts. The absence of this domain from all the models and frameworks may suggest either that these aspects are not considered relevant for evaluating implementation or that these complex concepts are poorly understood and difficult to express as specific evaluative questions. Further studies are warranted to determine the comprehensiveness of these concepts.
It is also important to note the customization of the domains of analysis: some domains presented in the models were not evaluated in the studies, while others were added as complements. This can be seen in Jeong et al. [ 34 ], where the "intervention" domain added to the evaluation with the RE-AIM framework reinforced that theoretical approaches are intended to guide the process rather than to prescribe norms. Despite this, few limitations were reported for the models, suggesting that these studies applied the models to defined contexts without a deep critical analysis of their domains.
This review has several limitations. First, only a few studies and methodological guidelines that explicitly present models and frameworks for assessing the implementation of CPGs were found, which means that few alternative models could be analyzed and presented in this review. Second, this review adopted multiple analytical categories (e.g., level of use, health service, target group, and domains evaluated), whose terminology varied enormously across the selected studies and documents, especially for the "domains evaluated" category. This difficulty in harmonizing the taxonomy used in the area has already been reported [ 26 ] and has significant potential to cause confusion. For this reason, studies and initiatives are needed to align understandings of these concepts and, as far as possible, standardize them. Third, in some studies/documents, the information extracted was not clearly tied to an analytical category. This required in-depth interpretation of the studies, which was conducted in pairs to avoid inappropriate interpretations.
This study contributes to the literature and to clinical practice management by describing models and frameworks specifically used to assess the implementation of CPGs in terms of their level of use, type of health service, target group related to the CPG, and the domains evaluated. While there are existing reviews on the theories, frameworks, and models used in implementation science, this review addresses aspects not previously covered in the literature. This information can assist stakeholders (such as policymakers, clinicians, and researchers) in selecting or adapting the most appropriate model to assess CPG implementation in their health context. Furthermore, this study is expected to guide future research on developing or adapting models to assess the implementation of CPGs in various contexts.
The use of models and frameworks to evaluate CPG implementation remains a challenge. Studies should clearly state the level at which a model is used, the type of health service evaluated, and the target group. The domains evaluated in these models may need adaptation to specific contexts. Nevertheless, utilizing models to assess CPG implementation is crucial, as they can guide a more thorough and systematic evaluation process, aiding in the continuous improvement of CPG implementation. The findings of this systematic review offer valuable insights for stakeholders in selecting or adjusting models and frameworks for CPG evaluation, supporting future theoretical advancements and research.
Abbreviations.
Australian Department of Health and Aged Care
Canadian Agency for Drugs and Technologies in Health
Centers for Disease Control and Prevention
Consolidated Framework for Implementation Research
Cumulative Index to Nursing and Allied Health Literature
Clinical practice guideline
Centre for Reviews and Dissemination
Guidelines International Networks
Getting Research into Practice
Health Systems Evidence
Institute of Medicine
The Joanna Briggs Institute
Ministry of Health of Brazil
Ministerio de Sanidad y Política Social
National Health and Medical Research Council
National Institute for Health and Care Excellence
Promoting action on research implementation in health systems framework
Predisposing, Reinforcing and Enabling Constructs in Educational Diagnosis and Evaluation-Policy, Regulatory, and Organizational Constructs in Educational and Environmental Development
Preferred Reporting Items for Systematic Reviews and Meta-Analyses
International Prospective Register of Systematic Reviews
Reach, effectiveness, adoption, implementation, and maintenance framework
Healthcare Improvement Scotland
United States of America
Virtual Health Library
World Health Organization
Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001. Available from: http://www.nap.edu/catalog/10027 . Cited 2022 Sep 29.
Field MJ, Lohr KN. Clinical Practice Guidelines: Directions for a New Program. Washington DC: National Academy Press. 1990. Available from: https://www.nap.edu/read/1626/chapter/8 Cited 2020 Sep 2.
Dawson A, Henriksen B, Cortvriend P. Guideline Implementation in Standardized Office Workflows and Exam Types. J Prim Care Community Heal. 2019;10. Available from: https://pubmed.ncbi.nlm.nih.gov/30900500/ . Cited 2020 Jul 15.
Unverzagt S, Oemler M, Braun K, Klement A. Strategies for guideline implementation in primary care focusing on patients with cardiovascular disease: a systematic review. Fam Pract. 2014;31(3):247–66. Available from: https://academic.oup.com/fampra/article/31/3/247/608680 . Cited 2020 Nov 5.
Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10(1):1–13. Available from: https://implementationscience.biomedcentral.com/articles/10.1186/s13012-015-0242-0 . Cited 2022 May 1.
Mangana F, Massaquoi LD, Moudachirou R, Harrison R, Kaluangila T, Mucinya G, et al. Impact of the implementation of new guidelines on the management of patients with HIV infection at an advanced HIV clinic in Kinshasa, Democratic Republic of Congo (DRC). BMC Infect Dis. 2020;20(1). Available from: https://search.ebscohost.com/login.aspx?direct=true&db=c8h&AN=146325052& .
Browman GP, Levine MN, Mohide EA, Hayward RSA, Pritchard KI, Gafni A, et al. The practice guidelines development cycle: a conceptual tool for practice guidelines development and implementation. J Clin Oncol. 1995;13(2):502–12. https://doi.org/10.1200/JCO.1995.13.2.502 .
Killeen SL, Donnellan N, O’Reilly SL, Hanson MA, Rosser ML, Medina VP, et al. Using FIGO Nutrition Checklist counselling in pregnancy: A review to support healthcare professionals. Int J Gynecol Obstet. 2023;160(S1):10–21. Available from: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85146194829&doi=10.1002%2Fijgo.14539&partnerID=40&md5=d0f14e1f6d77d53e719986e6f434498f .
Bauer MS, Damschroder L, Hagedorn H, Smith J, Kilbourne AM. An introduction to implementation science for the non-specialist. BMC Psychol. 2015;3(1):1–12. Available from: https://bmcpsychology.biomedcentral.com/articles/10.1186/s40359-015-0089-9 . Cited 2020 Nov 5.
Cassetti V, et al. An integrative review of the implementation of public health guidelines. Prev Med Rep. 2022;29:101867. Available from: http://www.epistemonikos.org/documents/7ad499d8f0eecb964fc1e2c86b11450cbe792a39 .
Eccles MP, Mittman BS. Welcome to implementation science. Implementation Science BioMed Central. 2006. Available from: https://implementationscience.biomedcentral.com/articles/10.1186/1748-5908-1-1 .
Damschroder LJ. Clarity out of chaos: use of theory in implementation research. Psychiatry Res. 2020;283:112461.
Handley MA, Gorukanti A, Cattamanchi A. Strategies for implementing implementation science: a methodological overview. Emerg Med J. 2016;33(9):660–4. Available from: https://pubmed.ncbi.nlm.nih.gov/26893401/ . Cited 2022 Mar 7.
Wang Y, Wong ELY, Nilsen P, Chung VCH, Tian Y, Yeoh EK. A scoping review of implementation science theories, models, and frameworks — an appraisal of purpose, characteristics, usability, applicability, and testability. Implement Sci. 2023;18(1):1–15. Available from: https://implementationscience.biomedcentral.com/articles/10.1186/s13012-023-01296-x . Cited 2024 Jan 22.
Moullin JC, Dickson KS, Stadnick NA, Albers B, Nilsen P, Broder-Fingert S, et al. Ten recommendations for using implementation frameworks in research and practice. Implement Sci Commun. 2020;1(1):1–12. Available from: https://implementationsciencecomms.biomedcentral.com/articles/10.1186/s43058-020-00023-7 . Cited 2022 May 20.
Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89(9):1322. Available from: /pmc/articles/PMC1508772/?report=abstract. Cited 2022 May 22.
Asada Y, Lin S, Siegel L, Kong A. Facilitators and Barriers to Implementation and Sustainability of Nutrition and Physical Activity Interventions in Early Childcare Settings: a Systematic Review. Prev Sci. 2023;24(1):64–83. Available from: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85139519721&doi=10.1007%2Fs11121-022-01436-7&partnerID=40&md5=b3c395fdd2b8235182eee518542ebf2b .
Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, et al., editors. Cochrane Handbook for Systematic Reviews of Interventions. version 6. Cochrane; 2022. Available from: https://training.cochrane.org/handbook. Cited 2022 May 23.
Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372. Available from: https://www.bmj.com/content/372/bmj.n71 . Cited 2021 Nov 18.
Clarke M, Oxman AD, et al. Appendix A: Guide to the contents of a Cochrane Methodology protocol and review. In: Higgins JP, Green S, editors. Cochrane Handbook for Systematic Reviews of Interventions. Version 5; 2011.
Kislov R, Pope C, Martin GP, Wilson PM. Harnessing the power of theorising in implementation science. Implement Sci. 2019;14(1):1–8. Available from: https://implementationscience.biomedcentral.com/articles/10.1186/s13012-019-0957-4 . Cited 2024 Jan 22.
Ouzzani M, Hammady H, Fedorowicz Z, Elmagarmid A. Rayyan-a web and mobile app for systematic reviews. Syst Rev. 2016;5(1):1–10. Available from: https://systematicreviewsjournal.biomedcentral.com/articles/10.1186/s13643-016-0384-4 . Cited 2022 May 20.
JBI. JBI’s Tools Assess Trust, Relevance & Results of Published Papers: Enhancing Evidence Synthesis. Available from: https://jbi.global/critical-appraisal-tools . Cited 2023 Jun 13.
Drisko JW. Qualitative research synthesis: An appreciative and critical introduction. Qual Soc Work. 2020;19(4):736–53.
Pope C, Mays N, Popay J. Synthesising qualitative and quantitative health evidence: a guide to methods. Maidenhead: Open University Press; 2007. Cited 2022 May 22.
Nilsen P, Birken SA, editors. Handbook on implementation science. Cheltenham: Edward Elgar Publishing; 2020. Available from: https://www.e-elgar.com/shop/gbp/handbook-on-implementation-science-9781788975988.html . Cited 2023 Apr 15.
Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: A consolidated framework for advancing implementation science. Implement Sci. 2009;4(1):1–15. Available from: https://implementationscience.biomedcentral.com/articles/10.1186/1748-5908-4-50 . Cited 2023 Jun 13.
Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38(2):65–76. Available from: https://pubmed.ncbi.nlm.nih.gov/20957426/ . Cited 2023 Jun 11.
Bahtsevani C, Willman A, Khalaf A, Östman M, Ostman M. Developing an instrument for evaluating implementation of clinical practice guidelines: a test-retest study. J Eval Clin Pract. 2008;14(5):839–46. Available from: https://search.ebscohost.com/login.aspx?direct=true&db=c8h&AN=105569473& . Cited 2023 Jan 18.
Balbale SN, Hill JN, Guihan M, Hogan TP, Cameron KA, Goldstein B, et al. Evaluating implementation of methicillin-resistant Staphylococcus aureus (MRSA) prevention guidelines in spinal cord injury centers using the PARIHS framework: a mixed methods study. Implement Sci. 2015;10(1):130. Available from: https://pubmed.ncbi.nlm.nih.gov/26353798/ . Cited 2023 Apr 3.
Breimaier HE, Heckemann B, Halfens RJGG, Lohrmann C. The Consolidated Framework for Implementation Research (CFIR): a useful theoretical framework for guiding and evaluating a guideline implementation process in a hospital-based nursing practice. BMC Nurs. 2015;14(1):43. Available from: https://search.ebscohost.com/login.aspx?direct=true&db=c8h&AN=109221169& . Cited 2023 Apr 3.
Chou AF, Vaughn TE, McCoy KD, Doebbeling BN. Implementation of evidence-based practices: Applying a goal commitment framework. Health Care Manage Rev. 2011;36(1):4–17. Available from: https://pubmed.ncbi.nlm.nih.gov/21157225/ . Cited 2023 Apr 30.
Porritt K, McArthur A, Lockwood C, Munn Z. JBI Manual for Evidence Implementation. JBI Handbook for Evidence Implementation. JBI; 2020. Available from: https://jbi-global-wiki.refined.site/space/JHEI . Cited 2023 Apr 3.
Jeong HJ, Jo HS, Oh MK, Oh HW. Applying the RE-AIM framework to evaluate the dissemination and implementation of clinical practice guidelines for sexually transmitted infections. J Korean Med Sci. 2015;30(7):847–52. Available from: https://pubmed.ncbi.nlm.nih.gov/26130944/ . Cited 2023 Apr 3.
Grupo de trabajo sobre implementación de GPC. Implementación de Guías de Práctica Clínica en el Sistema Nacional de Salud. Manual Metodológico. 2009. Available from: https://portal.guiasalud.es/wp-content/uploads/2019/01/manual_implementacion.pdf . Cited 2023 Apr 3.
Commonwealth of Australia. A guide to the development, implementation and evaluation of clinical practice guidelines. National Health and Medical Research Council; 1998. Available from: https://www.health.qld.gov.au/__data/assets/pdf_file/0029/143696/nhmrc_clinprgde.pdf .
Queensland Health. Guideline implementation checklist: translating evidence into best clinical practice. 2022.
Quittner AL, Abbott J, Hussain S, Ong T, Uluer A, Hempstead S, et al. Integration of mental health screening and treatment into cystic fibrosis clinics: Evaluation of initial implementation in 84 programs across the United States. Pediatr Pulmonol. 2020;55(11):2995–3004. Available from: https://www.embase.com/search/results?subaction=viewrecord&id=L2005630887&from=export . Cited 2023 Apr 3.
Urquhart R, Woodside H, Kendell C, Porter GA. Examining the implementation of clinical practice guidelines for the management of adult cancers: A mixed methods study. J Eval Clin Pract. 2019;25(4):656–63. Available from: https://search.ebscohost.com/login.aspx?direct=true&db=c8h&AN=137375535& . Cited 2023 Apr 3.
Yinghui J, Zhihui Z, Canran H, Flute Y, Yunyun W, Siyu Y, et al. Development and validation of an evaluation tool for guideline implementation. Chinese J Evidence-Based Med. 2022;22(1):111–9. Available from: https://www.embase.com/search/results?subaction=viewrecord&id=L2016924877&from=export .
Breimaier HE, Halfens RJG, Lohrmann C. Effectiveness of multifaceted and tailored strategies to implement a fall-prevention guideline into acute care nursing practice: a before-and-after, mixed-method study using a participatory action research approach. BMC Nurs. 2015;14(1):18. Available from: https://search.ebscohost.com/login.aspx?direct=true&db=c8h&AN=103220991& .
Lai J, Maher L, Li C, Zhou C, Alelayan H, Fu J, et al. Translation and cross-cultural adaptation of the National Health Service Sustainability Model to the Chinese healthcare context. BMC Nurs. 2023;22(1). Available from: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85153237164&doi=10.1186%2Fs12912-023-01293-x&partnerID=40&md5=0857c3163d25ce85e01363fc3a668654 .
Zhao J, Li X, Yan L, Yu Y, Hu J, Li SA, et al. The use of theories, frameworks, or models in knowledge translation studies in healthcare settings in China: a scoping review protocol. Syst Rev. 2021;10(1):13. Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7792291 .
Tabak RG, Khoong EC, Chambers DA, Brownson RC. Bridging research and practice: models for dissemination and implementation research. Am J Prev Med. 2012;43(3):337–50. Available from: https://pubmed.ncbi.nlm.nih.gov/22898128/ . Cited 2023 Apr 4.
Phulkerd S, Lawrence M, Vandevijvere S, Sacks G, Worsley A, Tangcharoensathien V. A review of methods and tools to assess the implementation of government policies to create healthy food environments for preventing obesity and diet-related non-communicable diseases. Implement Sci. 2016;11(1):1–13. Available from: https://implementationscience.biomedcentral.com/articles/10.1186/s13012-016-0379-5 . Cited 2022 May 1.
Buss PM, Pellegrini FA. A Saúde e seus Determinantes Sociais. PHYSIS Rev Saúde Coletiva. 2007;17(1):77–93.
Pereira VC, Silva SN, Carvalho VKSS, Zanghelini F, Barreto JOMM. Strategies for the implementation of clinical practice guidelines in public health: an overview of systematic reviews. Heal Res Policy Syst. 2022;20(1):13. Available from: https://health-policy-systems.biomedcentral.com/articles/10.1186/s12961-022-00815-4 . Cited 2022 Feb 21.
Grimshaw J, Eccles M, Tetroe J. Implementing clinical guidelines: current evidence and future implications. J Contin Educ Health Prof. 2004;24 Suppl 1:S31-7. Available from: https://pubmed.ncbi.nlm.nih.gov/15712775/ . Cited 2021 Nov 9.
Lotfi T, Stevens A, Akl EA, Falavigna M, Kredo T, Mathew JL, et al. Getting trustworthy guidelines into the hands of decision-makers and supporting their consideration of contextual factors for implementation globally: recommendation mapping of COVID-19 guidelines. J Clin Epidemiol. 2021;135:182–6. Available from: https://pubmed.ncbi.nlm.nih.gov/33836255/ . Cited 2024 Jan 25.
Lenzer J. Why we can’t trust clinical guidelines. BMJ. 2013;346(7913). Available from: https://pubmed.ncbi.nlm.nih.gov/23771225/ . Cited 2024 Jan 25.
Molino CGRC, Ribeiro E, Romano-Lieber NS, Stein AT, de Melo DO. Methodological quality and transparency of clinical practice guidelines for the pharmacological treatment of non-communicable diseases using the AGREE II instrument: a systematic review protocol. Syst Rev. 2017;6(1):1–6. Available from: https://systematicreviewsjournal.biomedcentral.com/articles/10.1186/s13643-017-0621-5 . Cited 2024 Jan 25.
Albers B, Mildon R, Lyon AR, Shlonsky A. Implementation frameworks in child, youth and family services – results from a scoping review. Child Youth Serv Rev. 2017;81:101–16.
Acknowledgements.
Not applicable
This study is supported by the Fundação de Apoio à Pesquisa do Distrito Federal (FAPDF). FAPDF Award Term (TOA) nº 44/2024—FAPDF/SUCTI/COOBE (SEI/GDF – Process 00193–00000404/2024–22). The content in this article is solely the responsibility of the authors and does not necessarily represent the official views of the FAPDF.
Authors and affiliations.
Department of Management and Incorporation of Health Technologies, Ministry of Health of Brazil, Brasília, Federal District, 70058-900, Brazil
Nicole Freitas de Mello & Dalila Fernandes Gomes
Postgraduate Program in Public Health, FS, University of Brasília (UnB), Brasília, Federal District, 70910-900, Brazil
Nicole Freitas de Mello, Dalila Fernandes Gomes & Jorge Otávio Maia Barreto
René Rachou Institute, Oswaldo Cruz Foundation, Belo Horizonte, Minas Gerais, 30190-002, Brazil
Sarah Nascimento Silva
Oswaldo Cruz Foundation - Brasília, Brasília, Federal District, 70904-130, Brazil
Juliana da Motta Girardi & Jorge Otávio Maia Barreto
NFM and JOMB conceived the idea and the protocol for this study. NFM conducted the literature search. NFM, SNS, JMG, and JOMB conducted the data collection, with advice and consensus gathering from JOMB. NFM and JMG assessed the quality of the studies. NFM and DFG conducted the data extraction. NFM performed the analysis and synthesis of the results, with advice and consensus gathering from JOMB. NFM drafted the manuscript. JOMB critically revised the first version of the manuscript. All authors revised and approved the submitted version.
Correspondence to Nicole Freitas de Mello .
Ethics approval and consent to participate; consent for publication; competing interests.
The authors declare that they have no competing interests.
Publisher’s note.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Additional file 1: PRISMA checklist. Description of data: Completed PRISMA checklist used for reporting the results of this systematic review.
Additional file 3: JBI’s critical appraisal tools for cross-sectional studies. Description of data: JBI’s critical appraisal tools to assess the trustworthiness, relevance, and results of the included studies. This is specific for cross-sectional studies.
Additional file 4: JBI’s critical appraisal tools for qualitative studies. Description of data: JBI’s critical appraisal tools to assess the trustworthiness, relevance, and results of the included studies. This is specific for qualitative studies.
Additional file 5: Methodological quality assessment results for cross-sectional studies. Description of data: Methodological quality assessment results for cross-sectional studies using JBI’s critical appraisal tools.
Additional file 6: Methodological quality assessment results for the qualitative studies. Description of data: Methodological quality assessment results for qualitative studies using JBI’s critical appraisal tools.
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .
Freitas de Mello, N., Nascimento Silva, S., Gomes, D.F. et al. Models and frameworks for assessing the implementation of clinical practice guidelines: a systematic review. Implementation Sci 19, 59 (2024). https://doi.org/10.1186/s13012-024-01389-1
Received: 06 February 2024
Accepted: 01 August 2024
Published: 07 August 2024
DOI: https://doi.org/10.1186/s13012-024-01389-1
ISSN: 1748-5908
BMC Health Services Research, volume 24, Article number: 908 (2024)
The use of telehealth has proliferated to the point of being a common and accepted method of healthcare service delivery. Due to the rapidity of telehealth implementation, the evidence underpinning this approach to healthcare delivery is lagging, particularly when considering the uniqueness of some service users, such as those in rural areas. This research aimed to address the current gap in knowledge related to the factors critical for the successful delivery of telehealth to rural populations.
This research used a qualitative descriptive design to explore telehealth service provision in rural areas from the perspective of clinicians and describe factors critical to the effective delivery of telehealth in rural contexts. Semi-structured interviews were conducted with clinicians from allied health and nursing backgrounds working in child and family nursing, allied health services, and mental health services. A manifest content analysis was undertaken using the Framework approach.
Sixteen health professionals from nursing, clinical psychology, and social work were interviewed. Participants mostly identified as female (88%) and ranged in age from 26 to 65 years with a mean age of 47 years. Three overarching themes were identified: (1) Navigating the role of telehealth to support rural healthcare; (2) Preparing clinicians to engage in telehealth service delivery; and (3) Appreciating the complexities of telehealth implementation across services and environments.
This research suggests that successful delivery of telehealth to rural populations requires consideration of the context in which telehealth services are delivered, particularly in rural and remote communities where resourcing and training to support health professionals are challenging. Rural populations, like all communities, need choice in healthcare service delivery modes and models to increase accessibility. Preparation and specific, intentional training for health professionals on how to transition to and maintain telehealth services are critical factors for the delivery of telehealth to rural populations. Future research should further investigate the training and supports required for telehealth service provision, including who should be trained, when, and what training will equip health professionals with the appropriate skill set to deliver rural telehealth services.
Telehealth is a commonly utilised application in rural health settings due to its ability to augment service delivery across wide geographical areas. During the COVID-19 pandemic, the use of telehealth became prolific as it was rapidly adopted across many new fields of practice to allow for healthcare to continue despite requirements for physical distancing. In Australia, the Medicare Benefits Scheme (MBS) lists health services that are subsidised by the federal government. Telehealth items were extensively added to these services as part of the response to COVID-19 [ 1 ]. Although there are no longer requirements for physical distancing in Australia, many health providers have continued to offer services via telehealth, particularly in rural areas [ 2 , 3 ]. For the purpose of this research, telehealth was defined as a consultation with a healthcare provider by phone or video call [ 4 ]. Telehealth service provision in rural areas requires consideration of contextual factors such as access to reliable internet, community members’ means to finance this access [ 5 ], and the requirement for health professionals to function across a broad range of specialty skills. These factors present a case for considering the delivery of telehealth in rural areas as a unique approach, rather than one portion of the broader use of telehealth.
Research focused on rural telehealth has proliferated alongside the rapid implementation of this service mode. To date, there has been a focus on the impact of telehealth on areas such as client access and outcomes [ 2 ], client and health professional satisfaction with services and technology [ 6 ], direct and indirect costs to the patient (travel cost and time), healthcare service provider staffing, lower onsite healthcare resource utilisation, improved physician recruitment and retention, and improved client access to care and education [ 7 , 8 ]. In terms of service implementation, these elements are important but do not outline the broader implementation factors critical to the success of telehealth delivery in rural areas. One study by Sutarsa et al. explored the implications of telehealth as a replacement for face-to-face services from the perspectives of general practitioners and clients [ 9 ] and articulated that telehealth is not a like-for-like substitute for face-to-face modes. Research has also highlighted the importance of understanding the experience of telehealth in rural Australia across different population groups, including Aboriginal and Torres Strait Islander peoples, and the need to consider culturally appropriate services [ 10 , 11 , 12 , 13 ].
Research is now required to determine what the critical implementation factors are for telehealth delivery in rural areas. This type of research would move towards answering calls for interdisciplinary, qualitative, place-based research [ 12 ] that explores factors required for the sustainability and usability of telehealth in rural areas. It would also contribute to the currently limited understanding of implementation factors required for telehealth delivery to rural populations [ 14 ]. There is a reasonable expectation that there is consistency in the way health services are delivered, particularly across geographical locations. Due to the rapid implementation of telehealth services, there was limited opportunity to proactively identify factors critical for successful telehealth delivery in rural areas and this has created a lag in policy, process, and training. This research aimed to address this gap in the literature by exploring and describing rural health professionals’ experiences providing telehealth services. For the purpose of this research, rural is inclusive of locations classified as rural or remote (MM3-6) using the Modified Monash Model which considers remoteness and population size in its categorisation [ 15 ].
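The MM3-6 eligibility rule described above can be sketched as a simple lookup. The category labels below follow the publicly documented Modified Monash Model scheme; the helper function itself is purely illustrative and not part of the study's methods.

```python
# Illustrative sketch only: MM category labels follow the publicly documented
# Modified Monash Model; this helper is hypothetical, not the study's method.
MM_LABELS = {
    1: "Metropolitan areas",
    2: "Regional centres",
    3: "Large rural towns",
    4: "Medium rural towns",
    5: "Small rural towns",
    6: "Remote communities",
    7: "Very remote communities",
}

def is_rural_for_study(mm_category: int) -> bool:
    """Return True if a location falls in MM3-6, the range this study
    treats as 'rural or remote' for participant eligibility."""
    return 3 <= mm_category <= 6

print(is_rural_for_study(5))  # True: small rural town (MM5)
print(is_rural_for_study(2))  # False: regional centre (MM2) is outside MM3-6
```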
This research study adopted a qualitative descriptive design as described by Sandelowski [ 16 ]. The purpose of a descriptive study is to document and describe a phenomenon of interest [ 17 ] and this method is useful when researchers seek to understand who was involved, what occurred, and the location of the phenomena of interest [ 18 ]. The phenomenon of interest for this research was the provision of telehealth services to rural communities by health professionals. In line with this, a purposive sampling technique was used to identify participants who have experience of this phenomenon [ 19 ]. This research is reported in line with the consolidated criteria for reporting qualitative research [ 20 ] to enhance transparency and trustworthiness of the research process and results [ 21 ].
This research aimed to:
Explore telehealth service provision in rural areas from the perspective of clinicians.
Describe factors critical to the successful delivery of telehealth in rural contexts.
People eligible to participate in the research were allied health (using the definition provided by Allied Health Professions Australia [ 22 ]) or nursing staff who delivered telehealth services to people living in the geographical area covered by two rural local health districts in New South Wales, Australia (encompassing rural areas MM3-6). Health organisations providing telehealth service delivery in the southwestern and central western regions of New South Wales were identified through the research teams’ networks and invited to be part of the research.
Organisations were intentionally selected for variability in telehealth adoption to capture different experiences, ranging from newly established (prompted by COVID-19) to well established (> 10 years of telehealth use). Organisations included government, non-government, and not-for-profit health service providers offering child and family nursing, allied health services, and mental health services. Child and family nursing services were delivered by a government health service and a not-for-profit specialist service, providing health professional advice, education, and guidance to families with a baby or toddler. Child and family nurses were in the same geographical region as the families receiving telehealth, and their transition to telehealth services was prompted by the COVID-19 pandemic. The participating allied health service was a large, non-government provider of allied health services to regional New South Wales. Allied health professionals were in the same region as the client receiving telehealth services, and use of telehealth in this organisation had commenced prior to the COVID-19 pandemic. Telehealth mental health services were delivered by an emergency mental health team, located at a large regional hospital, to clients in another healthcare facility or location where the health professional could not be physically present (typically a lower acuity health service in a rural location).
Once organisations agreed to disseminate the research invitation, a key contact person employed at each health organisation invited staff to participate via email. Staff were provided with the contact details of the research team in the email invitation. All recruitment and consent processes were managed by the research team to minimise the risk of real or perceived coercion between staff and the key contact person, who was often in a supervisory or managerial position within the organisation. Data were collected through semi-structured interviews conducted via an online platform with only the interviewer and participant present. Interviews were conducted by a research team member with training in qualitative data collection during November and December 2021 and were transcribed verbatim by a professional transcribing service. All participants were offered the opportunity to review their transcript and provide feedback; however, none opted to do so. Data saturation was not used as guidance for participant numbers, taking the view of Braun and Clarke [ 23 ] that meaning is generated through the analysis rather than by reaching a point of saturation.
Researchers undertook a manifest content analysis of the data using the Framework approach developed by Ritchie and Spencer [ 24 ]. All four co-authors were involved in the data analysis process. Framework uses five stages of analysis: (1) familiarisation, (2) identifying a thematic framework based on emergent overarching themes, (3) applying the coding framework to the interview transcripts [indexing], (4) reviewing and charting themes and subthemes, and (5) mapping and interpretation [ 24 , p. 178]. The research team analysed a common interview initially, identified codes and themes, then independently applied these to the remaining interviews. Themes were centrally recorded, reviewed, and discussed by the research team prior to inclusion in the thematic framework. Final themes were confirmed via collaborative discussion and consensus. The iterative process used to review and code data was recorded in an Excel spreadsheet to ensure auditability and credibility, and to enhance the trustworthiness of the analysis process.
This study was approved by the Greater Western NSW Human Research Ethics Committee and Charles Sturt University Human Research Ethics Committee (approval numbers: 2021/ETH00088 and H21215). All participants provided written consent.
Eighteen health professionals consented to be interviewed. Two were lost to follow-up; therefore, semi-structured interviews were conducted with 16 health professionals, the majority of whom were from the discipline of nursing ( n = 13, 81.3%). Participant demographics and their pseudonyms are shown in Table 1 .
Participants mostly identified as female ( n = 14, 88%) and ranged in age from 26 to 65 years with a mean age of 47 years. Participants all delivered services to rural communities in the identified local health districts and resided within the geographical area they serviced. The participants resided in areas classified as MM3-6 but were most likely to reside in an area classified MM3 (81%). Average interview time was 38 min, and all interviews were conducted online via Zoom.
Three overarching themes were identified through the analysis of interview transcripts with health professionals. These themes were: (1) Navigating the role of telehealth to support rural healthcare; (2) Preparing clinicians to engage in telehealth service delivery; and (3) Appreciating the complexities of telehealth implementation across services and environments.
The first theme described clinicians’ experiences of using telehealth to deliver healthcare to rural communities, including perceived benefits and challenges to acceptance, choice, and access. Interview participants identified several factors that impacted or influenced the way they could deliver telehealth, and these were common across the different organisational structures. Clinicians highlighted the need to consider how to effectively navigate the role of telehealth in supporting their practice, including when it would enhance their practice and when it might create barriers. The ability to improve rural service provision through greater access was commonly discussed by participants. In terms of factors important for telehealth delivery in rural contexts, the participants demonstrated that knowledge of why and how telehealth was used was important, including the broadened opportunity for healthcare access and an understanding of the benefits and challenges of providing these services.
Participants described a range of benefits of using telehealth to contact small, rural locations and facilitate greater access to services closer to home. This was particularly evident when there was a lack of specialist support in these areas. These opportunities meant that rural people could receive the timely care they required without the burden of travelling significant distances to access health services.
The obvious thing in an area like this, is that years ago, people were being transported three hours just to see us face to face. It’s obviously giving better, more timely access to services. (Patrick)
Participants saw staff access to specialist support as an important aspect of rural healthcare because of the challenges associated with a lack of staffing and resources in these areas, which potentially increased the risks for staff, particularly when managing clients with acute mental illnesses.
Within the metro areas they’ve got so many staff and so many hospitals and they can manage mental health patients quite well within those facilities, but with us some of these hospitals will have one RN on overnight and it’s just crappy for them, and so having us able to do video link, it kind of takes the pressure off and we’re happy to make the decisions and the risky decisions for what that person needs. (Tracey)
Participants described how the option to use telehealth to provide specialised knowledge and expertise to support local health staff in rural hospitals likely led to more appropriate outcomes for clients who wanted to remain in their community. Conversely, Amber described the implications if telehealth was not available.
If there was some reason why the telehealth wasn’t available… quite often, I suppose the general process be down to putting the pressure on the nursing and the medical staff there to make a decision around that person, which is not a fair or appropriate thing for them to do. (Amber)
Complementing the advantage of reduced travel time to access services was the ability for clients to access additional support via telehealth, which was perceived as a benefit. For example, one participant described how telehealth was useful for troubleshooting a client’s problems rather than waiting for their next scheduled appointment.
If a mum rings you with an issue, you can always say to them “are you happy to jump onto My Virtual Care with me now?” We can do that, do a consult over My Virtual Care. Then I can actually gauge how mum is. (Jade)
While accessibility was a benefit, participants highlighted that rural communities need to be provided with choice, rather than assuming that telehealth is the preferred option for everyone, as many rural clients want face-to-face services.
They’d all prefer, I think, to be able to see someone in person. I think that’s generally what NSW rural [want] —’cause I’m from country towns as well—there’s no substitute, like I said, for face-to-face assessment. (Adam)
Other, more practical limitations of broad adoption of telehealth raised by the participants included issues with managing technology and variability in internet connectivity.
For many people in the rural areas, it’s still an issue having that regular [internet] connection that works all the time. I think it’s a great option but I still think it’s something that some rural people will always have some challenges with because it’s not—there’s so many black spots and so many issues still with the internet connection in rural areas. Even in town, there’s certain areas that are still having lots of problems. (Chloe)
Participants also identified barriers related to assumptions that all clients would have access to technology and the necessary data to undertake a telehealth consultation, which was not always the case, particularly for individuals experiencing socioeconomic disadvantage.
A lot of [Aboriginal] families don’t actually have access to telehealth services. Unless they use their phone. If they have the technology on their phones. I found that was a little bit of an issue to try and help those particular clients to get access to the internet, to have enough data on their phone to make that call. There was a lot of issues and a lot of things that we were putting in complaints about as they were going “we’re using up a lot of these peoples’ data and they don’t have internet in their home.” (Evelyn).
Other challenges identified by the participants related to the use of telehealth with clients who required additional support. Many participants talked about the complexities of using an interpreter during a telehealth consultation for culturally and linguistically diverse clients.
Having interpreters, that’s another element that’s really, really difficult because you’re doing video link, but then you’ve also got the phone on speaker and you’re having this three-way conversation. Even that, in itself, that added element on video link is really, really tough. It’s a really long process. (Tracey)
In summary, this theme described some of the benefits and constraints when using telehealth for the delivery of rural health services. The participants demonstrated the importance of understanding the needs and contexts of individual clients, and accounting for this when making decisions to incorporate telehealth into their service provision. Understanding how and why telehealth can be implemented in rural contexts was an important foundation for the delivery of these services.
The preparation required for clinicians to engage with telehealth service delivery was highlighted and the participants described the unique set of skills required to effectively build rapport, engage, and carry out assessments with clients. For many participants who had not routinely used telehealth prior to the COVID-19 pandemic, the transition to using telehealth had been rapid. The participants reflected on the implications of rapidly adopting these new practices and the skills they required to effectively deliver care using telehealth. These skills were critical for effective delivery of telehealth to rural communities.
The rapid and often unsupported implementation of telehealth in response to the COVID-19 pandemic resulted in clinicians needing to learn and adapt to telehealth, often without being taught or with minimal instruction.
We had to do virtual, virtually overnight we were changed to, “Here you go. Do it this way,” without any real education. It was learned as we went because everybody was in the same boat. Everyone was scrabbling to try and work out how to do it. (Chloe)
In addition to telehealth services starting quickly, telehealth provision requires clinicians to use a unique set of skills. Therapeutic interventions and approaches were identified as being more challenging when seeing a client through a screen, compared to being physically present together in a room.
The body language is hidden a little bit when you’re on teleconference, whereas when you’re standing up face to face with someone, or standing side by side, the person can see the whole picture. When you’re on the video link, the patient actually can’t—you both can’t see each other wholly. That’s one big barrier. (Adam)
Participants emphasised communication skills, such as active listening and attention to body language, that were required when engaging via telehealth. These skills were seen as integral to building rapport and connection. The importance of language in an environment with limited visualisation of body language is further demonstrated by one participant, who described how they tuned into the timing and flow of the conversation to avoid interrupting, and how these skills were pertinent to using telehealth.
In the beginning especially, we might do this thing where I think they’ve finished or there’s a bit of silence, so I go to speak and then they go to speak at the same time, and that’s different because normally in person you can really gauge that quite well if they’ve got more to say. I think those little things mean that you’ve got to work a bit harder and you’ve got to bring those things to the attention of the client often. (Robyn)
Preparing clinicians to engage in telehealth also required skills in sharing clear and consistent information with clients about the process of interacting via telehealth. This included information to reassure the client that the telehealth appointment was private as well as prepare them for potential interruptions due to connection issues.
I think being really explicitly clear about the fact that with our setups we have here, no one can dial in, no one else is in my room even watching you. We’re not recording, and there’s a lot of extra information, I think around that we could be doing better in terms of delivering to the person. (Amber)
Telehealth was often described as a window rather than a view of the whole person, which presented limitations for clinicians, such as difficulty seeing nuances of expression. Participants described the difficulties of assessing a client via telehealth when they could not see the whole picture, including facial expressions, movement, behaviour, interactions with others, dress, and hygiene.
I found it was quite difficult because you couldn’t always see the actual child or the baby, especially if they just had their phone. You couldn’t pick up the body language. You couldn’t always see the facial expressions. You couldn’t see the child and how the child was responding. It did inhibit a lot of that side of our assessing. Quite often you’d have to just write, “Unable to view child.” You might be able to hear them but you couldn’t see them. (Chloe)
Due to the window view, the participants described how they needed to pay even greater attention to eye contact and tone of voice when engaging with clients via telehealth.
I think the eye contact is still a really important thing. Getting the flow of what they’re comfortable with a little bit too. It’s being really careful around the tone of voice as well too, because—again, that’s the same for face-to-face, but be particularly careful of it over telehealth. (Amber)
This theme demonstrates that unique and nuanced skills are required by clinicians to effectively engage in the provision of rural healthcare services via telehealth. Many clinicians described how the rapid uptake of telehealth required them to quickly adapt to providing telehealth services, modifying their approach rather than replicating what they would do in face-to-face contexts. Appreciating the different skill sets required for telehealth practice was perceived as an important element in supporting clinicians to deliver quality healthcare.
It was commonly acknowledged that clinicians needed an appreciation of the multiple environments in which telehealth was being delivered, as well as the types of consultations being undertaken. This was particularly important when well-resourced large regional settings were engaging with small rural services or when clinicians were undertaking consultations within a client’s home.
One of the factors identified as important for the successful delivery of services via telehealth was an understanding of the location and context being linked into. Participants regularly talked about the challenges of undertaking a telehealth consultation with clients at home, which impacted the quality of the consultation as it was easy to “lose focus” (Kelsey) and become distracted.
Instead of just coming in with one child, they had all the kids, all wanting their attention. I also found that babies and kids kept pressing the screen and would actually disconnect us regularly. (Chloe)
For participants located in larger regional locations delivering telehealth services to smaller rural hospitals, it was acknowledged that not all services had equivalent resources, skills, and experience with this type of healthcare approach.
They shouldn’t have to do—they’ve gotta double-click here, login there. They’re relying on speakers that don’t work. Sometimes they can’t get the cameras working. I think telehealth works as long as it’s really user friendly. I think nurses—as a nurse, we’re not supposed to be—I know IT’s in our job criteria, but not to the level where you’ve got to have a degree in technology to use it. (Adam)
Participants also recognised that supporting a client through a telehealth consultation adds workload stress, as rural clinicians are often under caseload pressure and juggling multiple other tasks while trying to troubleshoot the technology issues associated with a telehealth consultation.
Most people are like me, not great with computers. Sometimes the nurse has got other things in the Emergency Department she’s trying to juggle. (Eleanor)
Participants talked about the challenges that arose due to inconsistencies in where and how the telehealth consultation would be conducted. Concerns about online safety and information privacy were identified by participants.
There’s the privacy issue, particularly when we might see someone and they might be in a bed and they’ve got a laptop there, and they’re not given headphones, and we’re blaring through the speaker at them, and someone’s three meters away in another bed. That’s not good. That’s a bit of a problem. (Patrick)
When telehealth was offered as an option to clients at a remote healthcare site, clinicians noted that some clients were not provided with adequate support and were left to undertake the consultation by themselves, which could create safety risks for the client and leave the telehealth clinician unable to control the situation.
There were some issues with patients’ safety though. Where the telehealth was located was just in a standard consult room and there was actually a situation where somebody self-harmed with a needle that was in a used syringe box in that room. Then it was like, you just can’t see high risk—environment. (Eleanor)
Additionally, participants noted that they were often using their own office space to conduct telehealth consultations rather than a clinical room which meant there were other considerations to think about.
Now I always lock my room so nobody can enter. That’s a nice little lesson learnt. I had a consult with a mum and some other clinicians came into my room and I thought “oh my goodness. I forgot to lock.” I’m very mindful now that I lock. (Jade)
This theme highlights the complexities that exist when implementing telehealth across a range of rural healthcare settings and environments. Participants noted that skills and experience in using telehealth varied across staff located in smaller rural areas, which could impact how effective a consultation was. Participants identified the importance of purposely considering the environment in which the telehealth consultation was being held, ensuring that privacy, safety, and distraction concerns had been adequately addressed before the consultation began. These factors were considered important for the successful implementation of telehealth in rural areas.
This study explored telehealth service delivery in various rural health contexts with 16 allied health and nursing clinicians who had provided telehealth services to people living in rural communities prior to and during the COVID-19 pandemic. Reflections gained from clinicians were analysed and reported thematically. The major themes identified were clinicians navigating the role of telehealth to support rural healthcare, the need to prepare clinicians to engage in telehealth service delivery, and appreciating the complexities of telehealth implementation across services and environments.
The utilisation of telehealth for health service delivery has been promoted as a solution to access and equity issues, particularly for rural communities, which are often affected by limited health services due to distance and isolation [ 6 ]. This study identified a range of perceived benefits for both clients and clinicians, such as improved access to services across large geographic distances, including specialist care, and reduced travel time to engage with a range of health services. These findings are largely supported by the broader literature, such as the systematic review undertaken by Tsou et al. [ 25 ], which found that telehealth can improve clinical outcomes and the timeliness of access to services, including specialist knowledge. Clinicians in our study also noted the benefits of using telehealth for ad hoc clinical support outside of regular appointment times, which to date has not been commonly reported in the literature as a benefit. Further investigation into this aspect may be warranted.
The findings from this study identify a range of challenges that exist when delivering health services within a virtual context. Participants commonly highlighted that personal preferences for face-to-face sessions could not always be accommodated when implementing telehealth services in rural areas. The perceived technological possibilities to improve access can have unintended consequences for community members, which may contribute to a lack of responsiveness to community needs [ 12 ]. It is therefore important to understand the client and their preferences for using telehealth rather than making assumptions about the appropriateness of this type of health service delivery [ 26 ]. As such, telehealth is likely to function best when there is a pre-established relationship between the client and clinician, with clients who have a good knowledge of their personal health and have access to and familiarity with digital technology [ 13 ]. It may also be appropriate to consider how telehealth can act as a supplementary tool rather than a stand-alone service model replacing face-to-face interactions [ 13 ].
As identified in this study, managing technology and internet connectivity are commonly reported issues for rural communities engaging in telehealth services [ 27 , 28 ]. Additionally, within some rural communities experiencing higher socioeconomic disadvantage, limited access to an appropriate level of technology and the data required to undertake a telehealth consult was a deterrent to engaging with these types of services. Mathew et al. [ 13 ] found that bandwidth impacted video consultations, which was further compromised by weather conditions, and that clients without smartphones had difficulty accessing the relevant virtual consultation software.
The findings presented here indicate that while telehealth can be a useful model, it may not be suitable for all clients or client groups. For example, the use of interpreters in telehealth to support clients was a key challenge identified in this study. This is supported by Mathew et al. [ 13 ], who identified that language barriers affected the quality of telehealth consultations and that accessing appropriate interpreters was often difficult. Consideration of health and digital literacy, access to and availability of technology and internet, appropriate client selection, and facilitating client choice are all important drivers to enhance telehealth experiences [ 29 ]. Nelson et al. [ 6 ] acknowledged the barriers that exist with telehealth, suggesting that ‘it is not the groups that have difficulty engaging, it is that telehealth and digital services are hard to engage with’ (p. 8). Telehealth services need to be delivered in a way that is inclusive of different groups, and this becomes more pertinent in rural areas where resources are not equivalent to those in metropolitan areas.
The findings of this research highlight the unique set of skills required for health professionals to translate their practice to a virtual medium. The participants described these modifications in relation to communication skills, the ability to build rapport, conducting healthcare assessments, and providing treatment while looking at a ‘window view’ of a person. Several other studies have reported similar skill sets required to use telehealth effectively. Uscher-Pines et al. [ 30 ] conducted research on the experiences of psychiatrists moving to telemedicine during the COVID-19 pandemic and noted challenges affecting the quality of provider-patient interactions and difficulty conducting assessments through the window of a screen. Henry et al. [ 31 ] documented a list of interpersonal skills considered essential for the use of telehealth, encompassing attributes related to set-up, verbal and non-verbal communication, relationship building, and environmental considerations.
Despite the literature uniformly agreeing that telehealth requires a unique skill set, there is no agreement on how, when, and for whom education related to these skills should be provided. The skills required for health professionals to use telehealth have been treated as an add-on to health practice rather than as a specialty skill set requiring learning and assessment. This is reflected in research such as that by Nelson et al. [ 6 ], who found that 58% of mental health professionals using telehealth in rural areas were not trained to use it. This gap between training and practice likely arose from the rapid and widespread implementation of telehealth during the COVID-19 pandemic (i.e. the change in MBS item numbers [ 1 ]) but has not been addressed in subsequent years. For practice to remain in step with policy and funding changes, the factors required for successful implementation of telehealth in rural practice must be addressed.
The lack of clarity around who must undertake telehealth training, and how regularly, presents a challenge for rural health professionals, whose skill set has been described as specialist-generalist, covering a significant breadth of knowledge [ 32 ]. Maintaining knowledge currency across this breadth is integral and requires significant resources (time, travel, money) in an environment where access to education can be limited [ 33 ]. There is a risk associated with continually adding skills to the workload of rural health professionals without adequate guidance and provision of time to develop and maintain them.
While the education required to equip rural health professionals with the skills needed to use telehealth effectively in their practice is developing, until education requirements are uniformly understood and made accessible, this gap is likely to continue to pose a risk for rural health professionals and the community members accessing their services. Major investment in the education of all health professionals in telehealth service delivery, no matter the context, has been identified as critical [6].
This research highlights that the experience of using telehealth in rural communities is unique, and thus a ‘one size fits all’ approach is not helpful and can overlook the individual needs of a community. Participants described experiences of using telehealth that differed between rural communities, particularly for smaller, more remote locations, where resources, staff support, and experience using telehealth were not always equivalent to those of larger rural locations. Research has indicated the need to invest in resourcing and education to support the expansion of telehealth, noting that this is particularly important in rural, regional, and remote areas [34]. Our study recognises that this is an ongoing need, as rural communities continue to have diverse experiences of using telehealth services. Careful consideration of the context of individual rural health services is required, including community needs, location, and resource availability at both ends of the consultation. Use of telehealth cannot have the same outcomes in every area. It is imperative that service providers and clinicians delivering telehealth from metropolitan areas to rural communities appreciate and understand the uniqueness of every community, so that their approach is tailored and helps rather than hinders the experience of people in rural communities.
There are a number of limitations inherent to the design of this study. Participants were recruited via their workplace, and although steps were taken to ensure they understood the research would not affect their employment, it is possible some employees perceived an association between the research and their employment. Health professionals who had either very positive or very negative experiences with telehealth may have been more likely to participate, as they may have been more motivated to discuss their experiences. In addition, only health services already connected with the researchers’ networks were invited to participate. Other limitations include purposive sampling, meaning the opinions of the participants are not generalisable. The participant group also comprised mostly nursing professionals, whose experiences with telehealth may differ from those of other health disciplines. Finally, it is important to acknowledge that the opinions of the health professionals who participated in the study may not represent or align with the experiences and opinions of service users.
This study illustrates that while telehealth has provided increased access to services for many rural communities, others have experienced barriers related to variability in connectivity and managing technology. The results demonstrated that telehealth may not be the preferred or appropriate option for some individuals in rural communities, and it is important to provide choice. Consideration of the context in which telehealth services are being delivered, particularly in rural and remote communities where there are challenges with resourcing and training to support health professionals, is critical to the success of telehealth service provision. Another critical factor is preparation and specific, intentional training for health professionals on how to transition to, manage, and maintain telehealth services effectively. Telehealth interventions require a unique skill set, and guidance on who should be trained, when, and in what remains to be determined.
The qualitative data collected for this study were de-identified before analysis. Consent was not obtained to use or publish individual-level identified data from the participants, and hence these data cannot be shared publicly. The de-identified data can be obtained from the corresponding author on reasonable request.
Commonwealth of Australia. COVID-19 Temporary MBS Telehealth Services. Department of Health and Aged Care, Australian Government; 2022. https://www.mbsonline.gov.au/internet/mbsonline/publishing.nsf/Content/Factsheet-TempBB
Caffery LA, Muurlink OT, Taylor-Robinson AW. Survival of rural telehealth services post‐pandemic in Australia: a call to retain the gains in the ‘new normal’. Aust J Rural Health. 2022;30(4):544–9.
Shaver J. The state of telehealth before and after the COVID-19 pandemic. Prim Care: Clin Office Pract. 2022;49(4):517–30.
Australian Digital Health Agency. What is telehealth? 2024. https://www.digitalhealth.gov.au/initiatives-and-programs/telehealth
Hirko KA, Kerver JM, Ford S, Szafranski C, Beckett J, Kitchen C, et al. Telehealth in response to the COVID-19 pandemic: implications for rural health disparities. J Am Med Inform Assoc. 2020;27(11):1816–8.
Nelson D, Inghels M, Kenny A, Skinner S, McCranor T, Wyatt S, et al. Mental health professionals and telehealth in a rural setting: a cross sectional survey. BMC Health Serv Res. 2023;23(1):200.
Butzner M, Cuffee Y. Telehealth interventions and outcomes across rural communities in the United States: narrative review. J Med Internet Res. 2021;23(8).
Calleja Z, Job J, Jackson C. Offsite primary care providers using telehealth to support a sustainable workforce in rural and remote general practice: a rapid review of the literature. Aust J Rural Health. 2023;31(1):5–18.
Sutarsa IN, Kasim R, Steward B, Bain-Donohue S, Slimings C, Hall Dykgraaf S, et al. Implications of telehealth services for healthcare delivery and access in rural and remote communities: perceptions of patients and general practitioners. Aust J Prim Health. 2022;28(6):522–8.
Bradford NK, Caffery LJ, Smith AC. Telehealth services in rural and remote Australia: a systematic review of models of care and factors influencing success and sustainability. Rural Remote Health. 2016;16(4):1–23.
Caffery LJ, Bradford NK, Wickramasinghe SI, Hayman N, Smith AC. Outcomes of using telehealth for the provision of healthcare to Aboriginal and Torres Strait Islander people: a systematic review. Aust N Z J Public Health. 2017;41(1):48–53.
Warr D, Luscombe G, Couch D. Hype, evidence gaps and digital divides: Telehealth blind spots in rural Australia. Health. 2023;27(4):588–606.
Mathew S, Fitts MS, Liddle Z, Bourke L, Campbell N, Murakami-Gold L, et al. Telehealth in remote Australia: a supplementary tool or an alternative model of care replacing face-to-face consultations? BMC Health Serv Res. 2023;23(1):1–10.
Campbell J, Theodoros D, Hartley N, Russell T, Gillespie N. Implementation factors are neglected in research investigating telehealth delivery of allied health services to rural children: a scoping review. J Telemed Telecare. 2020;26(10):590–606.
Commonwealth of Australia. Modified Monash Model. Department of Health and Aged Care, Commonwealth of Australia; 2021 [updated 14 December 2021]. https://www.health.gov.au/topics/rural-health-workforce/classifications/mmm
Sandelowski M. Whatever happened to qualitative description? Res Nurs Health. 2000;23(4):334–40.
Marshall C, Rossman GB. Designing qualitative research. Sage; 2014.
Caelli K, Ray L, Mill J. ‘Clear as mud’: toward greater clarity in generic qualitative research. Int J Qual Methods. 2003;2(2):1–13.
Tolley EE. Qualitative methods in public health: a field guide for applied research. 2nd ed. San Francisco, CA: Jossey-Bass & Pfeiffer Imprints, Wiley; 2016.
Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–57.
Levitt HM, Motulsky SL, Wertz FJ, Morrow SL, Ponterotto JG. Recommendations for designing and reviewing qualitative research in psychology: promoting methodological integrity. Qualitative Psychol. 2017;4(1):2.
Allied Health Professions Australia. Allied health professions. 2024. https://ahpa.com.au/allied-health-professions/
Braun V, Clarke V. To saturate or not to saturate? Questioning data saturation as a useful concept for thematic analysis and sample-size rationales. Qualitative Res Sport Exerc Health. 2021;13(2):201–16.
Ritchie J, Spencer L. Qualitative data analysis for applied policy research. In: Bryman A, Burgess RG, editors. Analyzing qualitative data. Routledge; 1994. pp. 173–94.
Tsou C, Robinson S, Boyd J, Jamieson A, Blakeman R, Yeung J, et al. Effectiveness of telehealth in rural and remote emergency departments: systematic review. J Med Internet Res. 2021;23(11):e30632.
Pullyblank K. A scoping literature review of rural beliefs and attitudes toward telehealth utilization. West J Nurs Res. 2023;45(4):375–84.
Jonnagaddala J, Godinho MA, Liaw S-T. From telehealth to virtual primary care in Australia? A rapid scoping review. Int J Med Inform. 2021;151:104470.
Jonasdottir SK, Thordardottir I, Jonsdottir T. Health professionals’ perspective towards challenges and opportunities of telehealth service provision: a scoping review. Int J Med Inform. 2022;167:104862.
Clay-Williams R, Hibbert P, Carrigan A, Roberts N, Austin E, Fajardo Pulido D, et al. The diversity of providers’ and consumers’ views of virtual versus inpatient care provision: a qualitative study. BMC Health Serv Res. 2023;23(1):724.
Uscher-Pines L, Sousa J, Raja P, Mehrotra A, Barnett ML, Huskamp HA. Suddenly becoming a virtual doctor: experiences of psychiatrists transitioning to telemedicine during the COVID-19 pandemic. Psychiatr Serv. 2020;71(11):1143–50.
Henry BW, Ames LJ, Block DE, Vozenilek JA. Experienced practitioners’ views on interpersonal skills in telehealth delivery. Internet J Allied Health Sci Pract. 2018;16(2):2.
McCullough K, Bayes S, Whitehead L, Williams A, Cope V. Nursing in a different world: remote area nursing as a specialist–generalist practice area. Aust J Rural Health. 2022;30(5):570–81.
Reeve C, Johnston K, Young L. Health profession education in remote or geographically isolated settings: a scoping review. J Med Educ Curric Dev. 2020;7:2382120520943595.
Cummings E, Merolli M, Schaper L, editors. Barriers to telehealth uptake in rural, regional, remote Australia: what can be done to expand telehealth access in remote areas. Digital Health: Changing the Way Healthcare is Conceptualised and Delivered: Selected Papers from the 27th Australian National Health Informatics Conference (HIC 2019); 2019: IOS Press.
The authors would like to acknowledge Georgina Luscombe, Julian Grant, Claire Seaman, Jennifer Cox, Sarah Redshaw and Jennifer Schwarz who contributed to various elements of the project.
The study authors are employed by Three Rivers Department of Rural Health. Three Rivers Department of Rural Health is funded by the Australian Government under the Rural Health Multidisciplinary Training (RHMT) Program.
Authors and affiliations.
Three Rivers Department of Rural Health, Charles Sturt University, Locked Bag 588, Tooma Way, Wagga Wagga, NSW, 2678, Australia
Rebecca Barry, Elyce Green, Kristy Robson & Melissa Nott
RB & EG contributed to the conceptualisation of the study and methodological design. RB & MN collected the research data. RB, EG, MN, KR contributed to analysis and interpretation of the research data. RB, EG, MN, KR drafted the manuscript. All authors provided feedback on the manuscript and approved the final submitted manuscript.
Correspondence to Rebecca Barry.
Ethics approval and consent to participate.
Ethics approvals were obtained from the Greater Western NSW Human Research Ethics Committee and Charles Sturt University Human Research Ethics Committee (approval numbers: 2021/ETH00088 and H21215). Informed written consent was obtained from all participants. All methods were carried out in accordance with the relevant guidelines and regulations.
Not applicable.
The authors declare no competing interests.
Publisher’s note.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .
Cite this article.
Barry, R., Green, E., Robson, K. et al. Factors critical for the successful delivery of telehealth to rural populations: a descriptive qualitative study. BMC Health Serv Res 24, 908 (2024). https://doi.org/10.1186/s12913-024-11233-3
Received: 19 March 2024
Accepted: 23 June 2024
Published: 07 August 2024
DOI: https://doi.org/10.1186/s12913-024-11233-3
ISSN: 1472-6963
Li, Z.; Wang, T.; Sun, S. Research on Quantitative Analysis Methods for the Spatial Characteristics of Traditional Villages Based on Three-Dimensional Point Cloud Data: A Case Study of Liukeng Village, Jiangxi, China. Land 2024 , 13 , 1261. https://doi.org/10.3390/land13081261
Deformation and development of landslides are highly complex processes, and relying solely on a single monitoring factor is insufficient to accurately assess the entire evolutionary trend of landslide deformation. Typically, multiple sensors should be deployed on a landslide mass to obtain multidimensional monitoring data and comprehensively analyze the landslide development process. Monitoring data obtained through multiple sensors exhibit a certain randomness and redundancy, and effective processing of these data is the key to landslide warning systems. However, deploying various sensors on landslide masses incurs significant costs, which usually limits their application to key landslide-prone points rather than enabling landslide warning systems over large areas. To construct a low-cost and widely applicable landslide warning model, this study installed two types of conventional monitoring devices on a landslide mass (displacement meters and rain gauges). First, the Saito method was applied to identify the macroscopic deformation stages of the landslide and to calculate the daily average deformation rate at each stage. Subsequently, a five-level warning pattern based on deformation rates was established and the critical thresholds for each warning level were determined. Finally, using daily displacement and rainfall, together with bedrock hardness and slope, as the factors, a feature vector set was constructed by associating each stage's daily average deformation rate with its warning level. An integrated machine learning network was employed to develop an intelligent landslide warning model, which enables the intelligent identification of landslide deformation stages and assists in decision-making for landslide warning over large areas.
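The pipeline described in the abstract — compute a stage's daily average deformation rate, map it to one of five warning levels via critical thresholds, and pair each day's monitored factors (displacement, rainfall, bedrock hardness, slope) with that level as a training label — can be sketched roughly as follows. The threshold values and example data here are illustrative assumptions, not figures from the study, and the ensemble classifier itself is omitted.

```python
# Hypothetical five-level warning scheme keyed to daily average deformation
# rate (mm/day). The study derives its own critical thresholds from the
# Saito-method stage analysis; these boundaries are placeholders.
THRESHOLDS = (0.5, 2.0, 5.0, 10.0)  # boundaries between levels 1..5

def warning_level(rate_mm_per_day):
    """Map a daily average deformation rate to a warning level, 1 (lowest) to 5."""
    # Each threshold the rate meets or exceeds bumps the level up by one.
    return 1 + sum(rate_mm_per_day >= t for t in THRESHOLDS)

def feature_vector(displacement_mm, rainfall_mm, hardness_index, slope_deg, stage_rate):
    """Build one labelled sample: the four monitored factors paired with the
    warning level of the deformation stage the day belongs to."""
    features = [displacement_mm, rainfall_mm, hardness_index, slope_deg]
    return features, warning_level(stage_rate)

# Example day (made-up values): 12 mm displacement, 40 mm rainfall, soft
# bedrock (index 2), 35-degree slope, during a stage moving at 3.1 mm/day.
x, y = feature_vector(12.0, 40.0, 2, 35.0, 3.1)
print(x, y)  # [12.0, 40.0, 2, 35.0] 3
```

A feature set labelled this way could then be fed to any multi-class classifier; the study itself uses an integrated machine learning network for that step.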
Relevant data supporting the findings of this study are detailed in the manuscript or included in the supplementary files. Additional data used in this analysis are available from the corresponding author upon reasonable request.
This work was supported by National Natural Science Foundation of China (No. 42302336); Project of the Department of Science and Technology of Sichuan Province (No. 2024YFHZ0098; No. 2023NSFSC0751); Geological survey project (No. DD20211364); and Open Project of Chengdu University of Information Technology (KYQN202317; 760115027).
Authors and affiliations.
College of Software Engineering, Chengdu University of Information and Technology, Chengdu, 610225, China
Dunlong Liu, Dan Tang & Xuejia Sang
Software Engineering Technology Research Support Center of Informatization Application of Sichuan, Chengdu, 610225, China
China Geological Environment Monitoring Institute, Beijing, 100081, China
China University of Geosciences, Beijing, 100083, China
Key Laboratory of Mountain Hazards and Earth Surface Process, Institute of Mountain Hazards and Environment, Chinese Academy of Sciences, Chengdu, 610041, China
Shaojie Zhang & Hongjuan Yang
Correspondence to Juan Ma or Shaojie Zhang.
Compliance with ethical standards.
All the authors declared that they have no conflicts of interest in this work. We declare that we do not have any commercial or associative interest that represents a conflict of interest in connection with the work submitted. All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. This article does not contain any studies with animals performed by any of the authors. Informed consent was obtained from all individual participants included in the study.
Liu, D., Tang, D., Ma, J. et al. Critical threshold mining of landslide deformation and intelligent early-warning methods based on multi-factor fusion. Bull Eng Geol Environ 83, 352 (2024). https://doi.org/10.1007/s10064-024-03841-4
Received: 27 October 2023
Accepted: 24 July 2024
Published: 10 August 2024
DOI: https://doi.org/10.1007/s10064-024-03841-4