TOT_PRE, PENCRISAL pre-test; RD_PRE, Deductive reasoning pre-test; RI_PRE, Inductive reasoning pre-test; RP_PRE, Practical reasoning pre-test; TD_PRE, Decision making pre-test; SP_PRE, Problem solving pre-test; TOT_POST, PENCRISAL post-test; RD_POST, Deductive reasoning post-test; RI_POST, Inductive reasoning post-test; RP_POST, Practical reasoning post-test; TD_POST, Decision making post-test; SP_POST, Problem solving post-test; Min, minimum; Max, maximum; Asym, asymmetry; and Kurt, kurtosis.
Description of metacognition measurement (MAI).
Variables | N | Min. | Max. | Mean | SD | Asym. | Kurt. | K-S p-sig. (exact) |
---|---|---|---|---|---|---|---|---|
TOT_MAI_PRE | 89 | 145 | 233 | 192.13 | 16.636 | −0.071 | 0.275 | 0.557 |
Decla_PRE | 89 | 22 | 37 | 30.58 | 3.391 | −0.594 | −0.152 | 0.055 |
Proce_PRE | 89 | 9 | 19 | 14.52 | 2.018 | −0.560 | 0.372 | 0.004 |
Condi_PRE | 89 | 8 | 23 | 18.04 | 3.003 | −0.775 | 0.853 | 0.013 |
CONO_PRE | 89 | 44 | 77 | 63.15 | 6.343 | −0.384 | 0.044 | 0.445 |
Plani_PRE | 89 | 10 | 31 | 24.35 | 4.073 | −0.827 | 0.988 | 0.008 |
Orga_PRE | 89 | 26 | 48 | 38.20 | 4.085 | −0.307 | 0.331 | 0.022 |
Moni_PRE | 89 | 15 | 35 | 25.24 | 3.760 | −0.436 | 0.190 | 0.005 |
Depu_PRE | 89 | 14 | 25 | 20.71 | 2.144 | −0.509 | 0.310 | 0.004 |
Eva_PRE | 89 | 12 | 28 | 20.49 | 3.310 | −0.178 | −0.044 | 0.176 |
REGU_PRE | 89 | 97 | 160 | 128.99 | 12.489 | −0.070 | 0.043 | 0.780 |
TOT_MAI_POST | 89 | 138 | 250 | 197.65 | 17.276 | −0.179 | 0.969 | 0.495 |
Decla_POST | 89 | 23 | 39 | 31.21 | 3.492 | −0.407 | 0.305 | 0.020 |
Proce_POST | 89 | 8 | 20 | 15.24 | 2.116 | −0.723 | 0.882 | 0.001 |
Condi_POST | 89 | 0 | 24 | 18.85 | 2.874 | −0.743 | 0.490 | 0.029 |
CONO_POST | 89 | 44 | 82 | 65.30 | 6.639 | −0.610 | 1.014 | 0.153 |
Plani_POST | 89 | 12 | 33 | 25.51 | 3.659 | −0.539 | 0.994 | 0.107 |
Orga_POST | 89 | 27 | 48 | 39.40 | 4.150 | −0.411 | 0.053 | 0.325 |
Moni_POST | 89 | 17 | 35 | 26.44 | 3.296 | −0.277 | 0.421 | 0.143 |
Depu_POST | 89 | 15 | 24 | 20.40 | 2.245 | −0.214 | −0.531 | 0.023 |
Eva_POST | 89 | 12 | 29 | 20.60 | 3.680 | −0.083 | −0.098 | 0.121 |
REGU_POST | 89 | 94 | 168 | 132.35 | 12.973 | −0.227 | 0.165 | 0.397 |
TOT_MAI_PRE, MAI pre-test; Decla_PRE, Declarative pre-test; Proce_PRE, Procedural pre-test; Condi_PRE, Conditional pre-test; CONO_PRE, Knowledge pre-test; Plani_PRE, Planning pre-test; Orga_PRE, Organization pre-test; Moni_PRE, Monitoring pre-test; Depu_PRE, Troubleshooting pre-test; Eva_PRE, Evaluation pre-test; REGU_PRE, Regulation pre-test; TOT_MAI_POST, MAI post-test; Decla_POST, Declarative post-test; Proce_POST, Procedural post-test; Condi_POST, Conditional post-test; CONO_POST, Knowledge post-test; Plani_POST, Planning post-test; Orga_POST, Organization post-test; Moni_POST, Monitoring post-test; Depu_POST, Troubleshooting post-test; Eva_POST, Evaluation post-test; and REGU_POST, Regulation post-test.
As the description of all the study variables shows, most of them fit the normal model adequately, although some present significant deviations, which can be explained by the sample size.
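The asymmetry and kurtosis values reported in the table can be computed from raw scores. A minimal sketch using only the Python standard library; the formulas are the adjusted Fisher-Pearson versions commonly reported by statistical packages, and the sample data are hypothetical:

```python
import statistics as st

def skew_kurtosis(xs):
    """Adjusted Fisher-Pearson sample skewness and excess kurtosis,
    the kind of statistics reported in the Asym. and Kurt. columns."""
    n = len(xs)
    m = st.mean(xs)
    s = st.stdev(xs)  # sample standard deviation (divisor n - 1)
    skew = (n / ((n - 1) * (n - 2))) * sum(((x - m) / s) ** 3 for x in xs)
    kurt = (n * (n + 1)) / ((n - 1) * (n - 2) * (n - 3)) \
        * sum(((x - m) / s) ** 4 for x in xs) \
        - 3 * (n - 1) ** 2 / ((n - 2) * (n - 3))
    return skew, kurt

# Hypothetical scores; a perfectly symmetric sample has skewness 0,
# and a flat sample has negative excess kurtosis.
print(skew_kurtosis([1, 2, 3, 4, 5]))
```

A value near 0 in both columns, as for TOT_MAI_PRE above, is consistent with approximate normality.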
Next, to verify whether there were significant differences in the metacognition variable between the measurements taken before and after the intervention, we compared means for related samples with Student’s t-test (see Table 3).
Comparison of the METAKNOWLEDGE variable as a function of PRE-POST measurements.
Variables | | N | M | SD | Mean difference (CI 95%) | t value | df | p-sig. (two-tailed) |
---|---|---|---|---|---|---|---|---|
TOT_MAI | Pre. | 89 | 192.13 | 16.636 | −8.152_−2.882 | −4.161 | 88 | 0.000 |
Post. | 89 | 197.65 | 17.276 | |||||
Decla | Pre. | 89 | 30.58 | 3.391 | −1.235_−0.023 | −2.063 | 88 | 0.042 |
Post. | 89 | 31.21 | 3.492 | |||||
Proce | Pre. | 89 | 14.52 | 2.018 | −1.210_−0.228 | −2.911 | 88 | 0.005 |
Post. | 89 | 15.24 | 2.116 | |||||
Condi. | Pre. | 89 | 18.04 | 3.003 | −1.416_−0.202 | −2.65 | 88 | 0.010 |
Post. | 89 | 18.85 | 2.874 | |||||
CONO | Pre. | 89 | 63.15 | 6.343 | −3.289_−1.025 | −3.787 | 88 | 0.000 |
Post. | 89 | 65.3 | 6.639 | |||||
Plan | Pre. | 89 | 24.35 | 4.073 | −1.742_−0.573 | −3.934 | 88 | 0.000 |
Post. | 89 | 25.51 | 3.659 | |||||
Orga | Pre. | 89 | 38.2 | 4.085 | −2.054_−0.350 | −2.803 | 88 | 0.006 |
Post. | 89 | 39.4 | 4.15 | |||||
Moni | Pre. | 89 | 25.24 | 3.76 | −1.924_−0.480 | −3.308 | 88 | 0.001 |
Post. | 89 | 26.44 | 3.296 | |||||
TS | Pre. | 89 | 20.71 | 2.144 | −0.159_−0.766 | 1.303 | 88 | 0.196 |
Post. | 89 | 20.4 | 2.245 | |||||
Eval | Pre. | 89 | 20.49 | 3.31 | −0.815_−0.613 | −0.282 | 88 | 0.779 |
Post. | 89 | 20.6 | 3.68 | |||||
REGU | Pre. | 89 | 128.99 | 12.489 | −5.364_−1.356 | −3.331 | 88 | 0.001 |
Post. | 89 | 132.35 | 12.973 |
The results show significant differences in the metaknowledge scale total and in most of its dimensions: the post-test means, both for the overall scale and for the three dimensions of the knowledge factor (declarative, procedural, and conditional), are higher than the pre-test means. In the regulation of cognition factor, however, there are only significant differences in the total and in the planning, organization, and monitoring dimensions, with means again greater in the post-test than in the pre-test. The troubleshooting and evaluation dimensions do not differ significantly after the intervention.
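The pre/post contrasts reported above can be reproduced with a paired-samples t-test. A minimal standard-library sketch; the scores and the hard-coded critical value are illustrative (1.987 approximates the two-tailed 95% value for df = 88, the degrees of freedom in these tables; a statistics package would look it up from the t distribution):

```python
import math
import statistics as st

def paired_t(pre, post, crit=1.987):
    """Paired-samples t-test on pre/post scores: returns the t statistic,
    degrees of freedom, and a confidence interval for the mean difference.
    crit is the two-tailed critical t value (default approximates df = 88)."""
    diffs = [a - b for a, b in zip(pre, post)]
    n = len(diffs)
    d_bar = st.mean(diffs)                # mean pre-minus-post difference
    se = st.stdev(diffs) / math.sqrt(n)   # standard error of the mean difference
    t = d_bar / se
    ci = (d_bar - crit * se, d_bar + crit * se)
    return t, n - 1, ci

# Hypothetical scores: post-test higher than pre-test gives a negative t,
# matching the sign convention of the tables above.
print(paired_t([10, 12, 14, 16], [11, 14, 15, 18]))
```

A negative t with a confidence interval excluding zero indicates a significant pre-to-post gain, as for TOT_MAI in Table 3.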
Finally, for critical thinking skills, the results show significant differences in the scale total and in the five factors across measurement times, with performance means rising after the intervention (see Table 4).
Comparison of the CRITICAL THINKING variable as a function of PRE-POST measurements.
Variables | | N | M | SD | Student’s t-test | | | |
---|---|---|---|---|---|---|---|---|
| | | | | Mean difference (CI 95%) | t value | df | p-sig. (two-tailed) |
TOT | Pre. | 89 | 25.146 | 5.436 | −8.720_−6.246 | −12.023 | 88 | 0.000 |
Post. | 89 | 32.629 | 5.763 | |||||
RD | Pre. | 89 | 2.978 | 3.391 | −2.298_−1.364 | −7.794 | 88 | 0.000 |
Post. | 89 | 4.809 | 3.492 | |||||
RI | Pre. | 89 | 4.213 | 1.627 | −1.608_−0.706 | −5.097 | 88 | 0.000 |
Post. | 89 | 5.371 | 1.547 | |||||
RP | Pre. | 89 | 18.04 | 2.248 | −1.416_−0.202 | −10.027 | 88 | 0.000 |
Post. | 89 | 18.85 | 2.295 | |||||
TD | Pre. | 89 | 63.15 | 1.796 | −3.083_−2.063 | −6.54 | 88 | 0.000 |
Post. | 89 | 65.3 | 1.748 | |||||
SP | Pre. | 89 | 24.35 | 2.058 | −1.135_−0.213 | −2.906 | 88 | 0.005 |
Post. | 89 | 25.51 | 1.812 |
These results show that metacognition improves with the CT intervention, and that critical thinking in turn improves alongside the metacognitive gains and the CT skills intervention. The intervention thus improves how people think about thinking, as well as about the results achieved, since metacognition supports decision-making and the final evaluation of the strategies used to solve problems.
The general aim of our study was to determine whether a critical thinking intervention program can also influence metacognitive processes. We know that our teaching methodology improves cross-sectional skills in argumentation, explanation, decision-making, and problem-solving, but we did not know whether this intervention also directly or indirectly influences metacognition. In our study, we sought to shed light on this little-known point. Bearing in mind how central thinking about thinking is to the proper functioning of our cognitive machinery and to reaching the best possible results in the problems we face, it is hard to understand the lack of attention this theme has received in other research. Our study aimed to remedy this deficiency somewhat.
As stated in the introduction, metacognition has to do with the consciousness, planning, and regulation of our activities. These mechanisms, as understood by many authors, have a blended cognitive and non-cognitive nature, which is a conceptual imprecision; what is known, though, is the enormous influence they exert on fundamental thinking processes. However, there is a large knowledge gap about the factors that make metacognition itself improve. This second research gap is one we have also aimed, in part, to narrow with this study. Our guide has been the idea of learning how to improve metacognition through a teaching initiative and through the improvement of fundamental critical thinking skills.
Our study has shed light in both directions, albeit in a modest way, since its design does not allow us to unequivocally discern some of the results obtained. However, we believe that the data provide relevant information about the existing relations between skills and metacognition, relations that have rarely been tested. These results allow us to describe them better, guiding the design of future studies that can more clearly discern their roles. Our data have shown that this relation is bidirectional, so that metacognition improves thinking skills and vice versa. It remains to establish a sequence of independent factors to avoid this confounding, a task for which the present study lays groundwork for future research in this area.
As the results show, scores in almost all metaknowledge dimensions are higher after the intervention; specifically, within the knowledge factor, the declarative, procedural, and conditional dimensions improve in the post-measurements. This improvement moves in the direction we predicted. However, the regulation of cognition factor only shows differences in the total and in the planning, organization, and monitoring dimensions. The declarative knowledge dimensions are thus more sensitive to change than the procedural ones, and within the latter, the dimensions over which we have more control are also more sensitive. For troubleshooting and evaluation, no changes are seen after the intervention. We may interpret this lack of effects as being due to the fact that everything related to evaluating results is strongly determined by calibration capacity, which is influenced by personality factors not considered in our study. Regarding critical thinking, we found differences in all of its dimensions, with higher scores after the intervention. We can tentatively state that this improved performance may be influenced not only by the intervention, but also by the observed metacognitive improvement, although our study could not separate these two factors and merely established their relation.
As we know, when people think about their thinking, they can always increase their critical thinking performance. Being conscious of the mechanisms used in problem-solving and decision-making always contributes to improving their execution. However, we need to address other questions to identify the specific determinants of these effects. Does performance improve because skills benefit metacognitively? If so, how? Is it only the level of consciousness which aids in regulating and planning execution, or must other factors also participate? What level of thinking skills can be beneficial for metacognition? At what skill level does this metacognitive change happen? And finally, we know that teaching is always metacognitive to the extent that it helps us know how to proceed with sufficient clarity, but does performance level modify the consciousness or regulation level of our action? Do bad results paralyze metacognitive activity while good ones stimulate it? Ultimately, all of these open questions are the future implications suggested by the current study. We believe them to be exciting and necessary challenges, which must be faced sooner rather than later. Finally, we cannot forget the implications derived from specific metacognitive instruction, as presented at the start of this study. An intervention of this type should also help us partially answer the aforementioned questions, as we cannot ignore what can be modified or changed by direct metacognition instruction.
Ethics statement.
Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. The patients/participants provided their written informed consent to participate in this study.
SR and CS contributed to the conception and design of the study. SR organized the database, performed the statistical analysis, and wrote the first draft of the manuscript. SR, CS, and CO wrote sections of the manuscript. All authors contributed to the article and approved the submitted version.
This study was partly financed by the Project FONDECYT no. 11220056 ANID-Chile.
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Developing a higher level of critical thinking can create a comprehensive risk story and properly direct mitigations throughout your organization.
Since the FDA published its report, “Guidance for Industry: Oversight of Clinical Investigations—A Risk-Based Approach to Monitoring,” in 2011, risk-based monitoring (RBM) has been governed by guidance that lays out the FDA’s expectations. Later, the European Medicines Agency (EMA) adopted ICH E6 (R2), making RBM a requirement.
Although robust centralized monitoring (CM) ensures risk-based quality monitoring (RBQM) success, regulatory agencies provide only a general framework for this process without specifying exactly how it should be done. Given the complexity of RBM and the lack of detailed guidance, it is crucial that CM teams develop critical-thinking skills to run these operations effectively. Critical thinking helps CM teams understand root causes, reduce the rate of false-positive signals, determine next steps in an investigation, and recommend risk mitigations.
While plenty of skill-based CM trainings exist for clinical trials, Good Clinical Practice (GCP), regulation, and data management, critical-thinking education has yet to arrive, and it requires a more complex approach involving a variety of formats and activities. Through our research, we found that developing critical thinking is not only possible but can help pharma companies become more efficient in their overall RBM strategy, from more accurate decoding of risk signals and mitigation actions to continuous improvement and better retention.
Critical thinking has many definitions and interpretations, which makes it hard to grasp. Benjamin Bloom’s taxonomy model offers a practical approach to understanding where and how critical thinking occurs. The model provides a hierarchy for achieving a higher order of thinking through knowledge, comprehension, application, analysis, synthesis and evaluation.
Looking for an introduction to monitoring and evaluation (M&E)? This guide covers the basics of M&E, including key concepts, definitions, steps for designing a plan, and tools and methods for data collection and analysis. Learn how to interpret and report M&E findings, and how to utilize M&E results for program improvement and decision-making. Whether you’re new to M&E or looking to refresh your knowledge, this guide is a valuable resource for anyone involved in program evaluation and performance measurement.
Monitoring and evaluation (M&E) is a critical process for assessing the performance and effectiveness of programs, projects, and policies. This process involves collecting and analyzing data on program activities, outputs, outcomes, and impact to determine whether the desired results have been achieved.
In today’s complex and dynamic development landscape, the importance of Monitoring and Evaluation (M&E) is more crucial than ever. It allows organizations to measure progress towards their goals, identify areas of improvement, and make evidence-based decisions to improve program outcomes.
Monitoring and evaluation (M&E) can also provide valuable information for accountability and transparency. Donors, funders, and other stakeholders expect organizations to be accountable for the resources they receive and demonstrate the impact of their interventions. M&E helps organizations demonstrate the effectiveness of their programs, build trust with stakeholders , and secure future funding.
M&E is essential for ensuring that programs are effective, efficient, and accountable. By monitoring and evaluating program performance, organizations can identify successes and challenges and make informed decisions to improve program outcomes and impact.
To understand monitoring and evaluation (M&E) effectively, it is essential to be familiar with some of the key concepts and definitions used in this field. Here are some of the essential terms:
Understanding these key concepts and definitions is crucial for developing effective M&E plans and implementing successful programs.
Designing a monitoring and evaluation (M&E) plan involves several steps and strategies to ensure that the plan is effective in measuring program performance, identifying areas for improvement, and making evidence-based decisions. Here are some of the key steps and strategies:
By following these steps and strategies, organizations can develop a comprehensive M&E plan that will help them measure program performance, identify areas of improvement, and make informed decisions.
In monitoring and evaluation (M&E), data collection and analysis are critical components that help organizations measure program performance, identify areas for improvement, and make evidence-based decisions. Here are some of the tools and methods commonly used for data collection and analysis in M&E:
Data Collection Tools:
Data Analysis Methods:
By using these tools and methods for data collection and analysis, organizations can gain insights into program performance and make evidence-based decisions to improve program outcomes. It is essential to select the appropriate tools and methods based on the type of data being collected and the research questions.
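As a concrete illustration of the quantitative side of this step, the sketch below summarizes hypothetical baseline and endline survey scores for one program indicator (all names and values are invented for the example; real M&E analysis would use whatever indicators the plan defines):

```python
import statistics as st

# Hypothetical survey scores for one indicator, collected at baseline
# (before the intervention) and endline (after it).
baseline = [54, 61, 58, 47, 66, 59, 52, 63]
endline = [60, 68, 61, 55, 70, 64, 58, 69]

def summarize(scores):
    """Basic descriptive statistics of the kind used in monitoring reports."""
    return {
        "n": len(scores),
        "mean": st.mean(scores),
        "median": st.median(scores),
        "stdev": round(st.stdev(scores), 2),
    }

# Average change on the indicator between the two survey rounds:
# a first, simple signal of whether the program moved the needle.
change = st.mean(endline) - st.mean(baseline)
print(summarize(baseline), summarize(endline), change)
```

Even a simple before/after comparison like this gives evaluators a starting point for the interpretation and reporting steps discussed next.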
Once data has been collected and analyzed in monitoring and evaluation (M&E), it is necessary to interpret and report the findings. This process involves making sense of the data and presenting it in a way that is clear, concise, and actionable. Here are some key considerations when interpreting and reporting M&E findings:
Reporting M&E findings is an essential component of the M&E process, as it helps to communicate program performance and guide program improvement. By following these key considerations, organizations can ensure that M&E findings are meaningful, relevant, and actionable.
The purpose of monitoring and evaluation (M&E) is to collect and analyze data on program performance, and to use this information to make evidence-based decisions and improve program outcomes. Here are some ways in which M&E results can be utilized for program improvement and decision-making:
By utilizing M&E results for program improvement and decision-making, organizations can improve program outcomes and ensure that resources are being used effectively. It is essential to make M&E an integral part of program design and implementation to ensure that data is being collected and analyzed on an ongoing basis.
These case studies provide a detailed perspective based on evaluation for each context, emphasizing the specific areas of impact and improvement addressed through Monitoring and Evaluation (M&E) efforts.
Basis of Evaluation: Efficiency and Service Delivery Improvement
Context: In a rural region with limited access to healthcare, a nonprofit organization aimed to improve the efficiency of a local health clinic. The clinic served a large population, but long wait times and resource allocation issues were prevalent.
M&E Approach: The evaluation focused on assessing the clinic’s efficiency and its impact on service delivery. Key evaluation components included:
Outcome: M&E data revealed significant improvements in clinic efficiency, marked by a 30% reduction in patient wait times. Patient satisfaction scores increased notably, indicating improved service quality. The clinic’s ability to serve a larger number of patients also improved, showcasing the positive impact of M&E on service delivery and efficiency.
Basis of Evaluation: Educational Outcome Improvement
Context: A local education authority implemented a literacy program in primary schools to enhance student performance in reading and comprehension.
M&E Approach: The evaluation was primarily concerned with assessing the program’s impact on educational outcomes. Key evaluation components included:
Outcome: M&E findings indicated a significant increase in student reading proficiency, with 80% of students demonstrating improvement. Classroom observations revealed that innovative teaching methods were effective, and teacher feedback led to further refinements in the program. This case study emphasized the role of M&E in improving educational outcomes.
Basis of Evaluation: Conservation and Biodiversity Preservation
Context: A conservation organization initiated a project to protect a threatened wildlife habitat from illegal logging and habitat degradation.
M&E Approach: The evaluation focused on assessing the impact of conservation efforts on the environment and biodiversity. Key evaluation components included:
Outcome: M&E data showed a stable or increasing population of threatened species and a significant reduction in illegal logging activities. The engagement of local communities contributed to the project’s success. This case study highlighted the role of M&E in conservation and community engagement.
Basis of Evaluation: Effective Humanitarian Assistance
Context: In the aftermath of a natural disaster, an international humanitarian organization provided emergency relief to affected communities.
M&E Approach: The evaluation aimed to assess the effectiveness of humanitarian aid distribution. Key evaluation components included:
Outcome: M&E data indicated efficient and targeted aid distribution, with relief items reaching those most in need. Shelter conditions improved over time, and beneficiary feedback led to adjustments in the aid distribution process. This case study highlighted the role of M&E in ensuring the effective delivery of humanitarian assistance.
Basis of Evaluation: Policy Impact and Economic Growth
Context: A government implemented a policy to promote renewable energy adoption and reduce carbon emissions.
M&E Approach: The evaluation focused on assessing the impact of the policy on specific policy objectives. Key evaluation components included:
Outcome: M&E data demonstrated a substantial increase in renewable energy production, a decrease in carbon emissions, and positive economic impacts. This case study underscored the role of M&E in assessing policy effectiveness and its economic consequences.
Monitoring and Evaluation (M&E) are not mere bureaucratic procedures but powerful tools that enable organizations and governments to navigate the complex landscape of projects, programs, and policies. Through systematic data collection, analysis, and interpretation, M&E sheds light on the effectiveness and impact of initiatives across diverse sectors. In our exploration of the basics of M&E, we’ve uncovered its transformative potential through real-world case studies.
M&E as the Catalyst for Improvement
In Case Study 1, we witnessed how M&E can enhance the efficiency of healthcare delivery, reducing wait times and improving patient satisfaction. M&E serves as a catalyst for identifying bottlenecks, optimizing resource allocation, and ultimately enhancing service delivery in the healthcare sector.
Empowering Education through Data
Case Study 2 highlighted the power of M&E in the field of education. By measuring changes in student performance and assessing teaching methods, M&E helps educational authorities fine-tune programs, ultimately empowering students with improved literacy and comprehension skills.
Conservation and Community Engagement
In Case Study 3, we observed the critical role of M&E in environmental conservation. M&E enables organizations to track changes in biodiversity, identify threats like illegal logging, and engage local communities in conservation efforts. This holistic approach underscores the importance of community involvement in environmental initiatives.
Effective Humanitarian Aid
Case Study 4 demonstrated how M&E ensures the efficient distribution of humanitarian aid. By tracking aid distribution, assessing beneficiary needs, and improving shelter quality, M&E plays a pivotal role in delivering timely and targeted assistance during crises.
Informing Evidence-Based Policy
Lastly, Case Study 5 exemplified the impact of M&E on policy evaluation. By monitoring renewable energy production, emissions reductions, and economic growth, M&E supports data-driven policymaking and helps governments achieve their objectives.
A Roadmap to Success
In conclusion, Monitoring and Evaluation serve as a roadmap to success, guiding organizations and policymakers toward evidence-based decisions and meaningful improvements. Through the careful collection and analysis of data, M&E empowers stakeholders to adapt, innovate, and achieve sustainable development goals.
As you embark on your journey into the world of Monitoring and Evaluation, remember that these tools are not just a means to an end; they are the cornerstone of informed progress. By harnessing the power of M&E, we can build a future where every initiative, whether in healthcare, education, conservation, humanitarian aid, or policymaking, is driven by data, enriched by insights, and dedicated to positive change.
With the basics of M&E under your belt and a commitment to its principles, you are well-equipped to embark on the path of evidence-driven impact and make a difference in the world.
IOM is considered an efficient organization with extensive field presence, implementing its many interventions through a large and decentralized network of regional offices and country offices. 1 IOM puts a strong focus on results-based management ( RBM ), which is promoted to strengthen organizational effectiveness and move towards evidence-based and results-focused programming.
A results-based approach requires robust monitoring and evaluation ( M&E ) systems that provide government officials, IOM staff, partners, donors and civil society with better means to the following:
M&E may at times seem challenging in the context of IOM’s interventions, where project duration may not be “long enough” to incorporate strong M&E, or where security, time pressure, funding and/or capacity constraints may hinder its rigorous implementation. For the same reasons, the benefits of M&E may go unrecognized as early as the proposal-writing stage, resulting in insufficient attention being given to it. The IOM Monitoring and Evaluation Guidelines is a good opportunity to correct those impressions and to put M&E at the centre of sound performance and of fulfilling the duty of accountability.
As IOM’s global role in addressing migration-related challenges has diversified and expanded, new political and organizational realities have demanded a different conceptualization of M&E, as well as reframed organizational thinking about what it constitutes and its application. These realities include the numerous operational demands, limited resources, accelerated speed of expected response and immediate visibility for impact and accountability, as well as the expected rapid integration of new organizational concepts, such as “value for money” and Theory of Change into daily work. Learning and information-sharing also channel a number of key messages and recommendations to be considered.
IOM’s internal and external environments have also undergone significant changes in recent years, with an increased focus on migration worldwide. As a United Nations-related agency, IOM is a main reference on migration, supporting the attainment of migration-related commitments of the 2030 Agenda for Sustainable Development (Sustainable Development Goals or SDGs) and contributing to the implementation of the Global Compact for Safe, Orderly and Regular Migration. IOM is also an increasingly important contributor to migration data and analysis on a global scale, including for the implementation of the 2030 Agenda, and is praised for its operational and pragmatic approach to managing migration, in line with its mandate and the Migration Governance Framework ( MiGOF ). Furthermore, IOM is internally guided by the Strategic Vision, which does not supersede the existing MiGOF: while the MiGOF sets out objectives and principles, it does not set a focused direction of travel, which the Strategic Vision is intended to provide. The Strategic Vision also aims to strengthen IOM’s capacity to contribute to the SDGs, the Global Compact for Migration and other existing cooperative frameworks. This chapter provides an overview of both monitoring and evaluation as key components and an overview of RBM at IOM; it also outlines the differences between monitoring and evaluation and explains how M&E together are relevant to IOM’s strategic approach and objectives.
Over the last 15 years, international actors have increasingly shifted to RBM. RBM supports better performance and greater accountability by applying a clear plan to manage and measure an intervention, with a focus on the results to be achieved. 3 By identifying, in advance, the intended results of an intervention and how its progress can be measured, it becomes easier to manage the intervention and to determine whether a genuine difference has been made for the people concerned.
At IOM, RBM is defined as a management strategy that sets out clear objectives and outcomes to define the way forward, and uses specific indicators to verify the progress made. RBM encompasses the whole project cycle: planning, managing implementation, monitoring, reporting and evaluation. 4
The aim of RBM is to provide valuable information for decision-making and lessons learned for the future, which includes the following:
• Planning, setting the vision and defining a results framework;
• Implementing interventions to achieve the results;
• Monitoring to ensure results are being achieved;
• Encouraging learning through reporting and evaluation.
Among other aspects, an RBM approach requires strong M&E, as well as knowledge management.
In 2011, IOM deliberately adopted an RBM approach at the project level, as seen in the first edition of the IOM Project Handbook. The 2017 edition of the IOM Project Handbook provides more detailed guidance on RBM and makes the use of a results matrix a requirement, to improve IOM’s work. 5
At a corporate level, IOM has identified a set of global results that it wants to achieve by 2023, using MiGOF as the basis for the Organization’s work and the Strategic Vision as a “direction of travel”. This is condensed in the Strategic Results Framework (SRF). This framework specifies the highest level of desired change IOM would like to achieve. The RBM approach builds a bridge between the framework and IOM’s traditional programmes. This allows IOM to report on the results it has collectively achieved, rather than on the activities performed.
Monitoring and evaluation are important parts of RBM , based on clearly defined and measurable results, processes, methodologies and tools to achieve results. M&E can be viewed as providing a set of tools to enable RBM, helping decision makers track progress and demonstrate an intervention’s higher-level results. 6 Results-based M&E moves from a focus on the immediate results, such as the successful implementation of activities and production of outputs, to the higher-level results, looking at the achievement of outcomes and impacts. Figure 1.1 shows RBM as a “life cycle approach” within which M&E are incorporated.
Source : Adapted from United Nations Development Programme, 2009 , p. 10.
RBM is a management strategy that sets out clear objectives and outcomes to define the way forward, and uses specific indicators to verify the progress made. It is seen as taking a life cycle approach, including planning, managing, monitoring, reporting and evaluating.

RBM at IOM is a means to further strengthen IOM’s interventions. RBM encourages project developers and managers to clearly articulate an intervention’s objective, the desired change it aims to achieve, what is required to achieve such change, whether the desired change is achieved and how ongoing or future performance can further improve through learning.

In essence, M&E supports RBM by monitoring and measuring intervention progress towards predetermined targets, refining implementation, and evaluating changes and results to further improve future interventions.
IOM resources
Other resources
Kusek, J.Z. and R. Rist
Organisation for Economic Co-operation and Development ( OECD )
United Nations Development Group ( UNDG )
United Nations Development Programme ( UNDP )
United Nations Evaluation Group ( UNEG )
Given IOM ’s broad thematic portfolio and the decentralized nature of the Organization, it is important, when implementing an intervention, to provide justification for the implementation, articulate what changes are expected to occur and, moreover, how these are expected to occur. Monitoring helps do just that.
Monitoring can often be confused with reporting, which is one of the components of monitoring. While reporting only refers to the compilation, transfer and distribution of information, monitoring focuses on the collection and analysis, on a regular basis, of the information required for reporting. Therefore, monitoring encompasses the planning, designing, selecting of methods and systematic gathering and analysis of the content, while reporting summarizes that content with the purpose of delivering the relevant information.
IOM defines monitoring as an established practice of internal oversight that provides management with an early indication of progress, or lack thereof, in the achievement of results, in both operational and financial activities. 7 Monitoring can take various shapes, vary in frequency and be tailored to a specific context, usually depending on the intervention’s objectives. In an IOM intervention, there are four key areas for monitoring: activity monitoring, results monitoring, financial monitoring and risk monitoring. 8
Figure 1.2. Scope of monitoring – Four key monitoring areas
Source : Adapted from IOM Regional Office Pretoria M&E presentation on Scope of Monitoring (2017).
While these are the four essential areas to monitor at IOM, additional types of monitoring are outlined in chapter 3 of the IOM Monitoring and Evaluation Guidelines .
In order to standardize its approach to monitoring, IOM has developed relevant standardized tools: (a) the IOM Results Matrix; and (b) the Results Monitoring Framework. 9 Despite this, it may still be a challenge for IOM staff to tailor these tools to the monitoring needs of the diverse portfolio of context-specific interventions IOM implements. Therefore, how to monitor within IOM largely depends on how IOM responds to particular migration-related needs within an intervention. Monitoring should be sufficiently flexible to allow for an assessment of whether interventions respond to emerging needs.
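The core logic behind any results-monitoring tool of this kind, whatever its exact format, is comparing actual indicator values against baselines and targets. The sketch below is purely illustrative and does not reproduce IOM’s actual Results Matrix or Results Monitoring Framework; the field names and the 50 per cent threshold are hypothetical choices for the example.

```python
from dataclasses import dataclass


@dataclass
class Indicator:
    """One row of a simple, hypothetical results matrix."""
    name: str
    baseline: float  # value before the intervention started
    target: float    # value the intervention aims to reach
    actual: float    # latest value collected during monitoring

    def progress(self) -> float:
        """Share of the baseline-to-target distance covered so far."""
        span = self.target - self.baseline
        if span == 0:
            return 1.0  # target already met at baseline
        return (self.actual - self.baseline) / span


def flag_off_track(indicators, threshold=0.5):
    """Return indicators whose progress falls below the threshold."""
    return [i for i in indicators if i.progress() < threshold]


# Illustrative data only, not real intervention figures.
matrix = [
    Indicator("households reached", baseline=0, target=400, actual=320),
    Indicator("staff trained", baseline=10, target=60, actual=20),
]

for ind in flag_off_track(matrix):
    print(f"Off track: {ind.name} ({ind.progress():.0%} of target)")
```

A check like this, run at each reporting interval, is what lets monitoring trigger timely, proactive adjustments rather than reactive ones.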
Monitoring is necessary, because it continuously generates the information needed to measure progress towards results throughout implementation and enables timely decision-making. Monitoring helps decision makers be anticipatory and proactive, rather than reactive, in situations that may become challenging to control. It can bring key elements of strategic foresight to IOM interventions.
Monitoring is undertaken on an ongoing basis during the implementation of an intervention. Throughout implementation, it is important to regularly ask relevant “monitoring questions”, such as whether:
• Planned activities are actually taking place (within the given time frame);
• There are gaps in the implementation;
• Resources have been/are being used efficiently;
• The intervention’s operating context has changed.
Monitoring is an established practice of internal oversight that provides management with an early indication of progress, or lack thereof, in the achievement of results, in both operational and financial activities.

Monitoring at IOM is a routine – but important – process of data collection and analysis, as well as an assessment of progress towards intervention objectives. In other words, it allows for the frequent assessment of the implementation process within IOM interventions.

Due to the different thematic areas and diverse approaches to responding to country, regional or global needs and expectations, a standardized approach to monitoring IOM interventions remains challenging. Monitoring needs to be flexible enough to assess whether and how IOM’s interventions are responding to emerging needs. Chapters 2, 3 and 4 of the IOM Monitoring and Evaluation Guidelines will provide more details on how monitoring achieves this.
2017 Module 2 and Module 4 . In: IOM Project Handbook . Second edition. Geneva (Internal link only).
2018b Monitoring Policy . IN/31. 27 September.
International Federation of Red Cross and Red Crescent Societies ( IFRC )
2011 Project/Programme Monitoring and Evaluation (M&E) Guide . Geneva.
While monitoring may ask the questions, “What is the current status of implementation? What has been achieved so far? How has it been achieved? When was it achieved?”, evaluation additionally helps to understand why and how well something was achieved, and passes judgement on the worth and merit of an intervention. Evaluation allows for a more rigorous analysis of the implementation of an intervention, also looking at why one effort worked better than another. Evaluation enriches learning processes and improves services and decision-making capability for those involved in an intervention. It also provides information not readily available from monitoring, derived from the use of evaluation criteria, such as in-depth consideration of impact, relevance, efficiency, effectiveness, coverage, coordination, sustainability, connectedness and coherence.
IOM defines evaluation as the systematic and objective assessment of an ongoing or completed intervention, including a project, programme, strategy or policy, its design, implementation and results.
Evaluation can be considered a means to discuss causality. While monitoring may show whether indicators have progressed, it remains limited in explaining, in detail, why a change occurred. Evaluation, on the other hand, looks at the question of what difference the implementation of an activity and/or intervention has made. It helps answer this question by assessing monitoring data that reflects what has happened and how , to identify why it happened. Evaluation provides practitioners with the required in-depth and evidence-based data for decision-making purposes, as it can assess whether, how, why and what type of change has occurred during an intervention.
Evaluation is also critical to assess the relevance and performance of the means and progress towards achieving change. Effective conduct and the use of credible evaluations go hand in hand with a culture of results-oriented, evidence-driven learning and decision-making. When evaluations are used, they contribute not only to accountability, but also to creating space for reflection, learning and the sharing of findings, innovations and experiences. They are a source of reliable information to help improve IOM ’s service provision to beneficiaries, migrants, Member States and donors. Findings, lessons learned and best practices from previous evaluations can also help enhance an intervention design and enrich the formulation of results and the results framework. Evaluations have their own methodological and analytical rigour, determined at the planning stage and depending on their intention and scope.
An evaluation can be conducted at every stage of the intervention cycle, depending on the type of evaluation being implemented. For example, an ex-ante evaluation conducted during the conceptualization phase of an intervention can set a strong foundation for a successful implementation. Evaluations conducted during implementation (for instance, real-time and midterm evaluations) are good sources for providing feedback on the status and progress, strengths or weaknesses of implementation. 11, 12 In this sense, evaluations provide decision makers with timely information to make adjustments, as required.
Evaluation should not be confused with related concepts such as review, assessment, needs assessment/appraisal or audit. Refer to the following definitions: 13
Review | According to the glossary, a review is “an assessment of the performance of an intervention, periodically or on an ad hoc basis”. A review is more extensive than monitoring but less comprehensive than an evaluation. |
Assessment | An assessment can commonly be defined as the action of estimating the nature, ability or quality of something. In the context of development interventions, it is often associated with another term to focus on what will be assessed, such as needs assessment, skills assessment, context assessment and results-based assessment. It can take place prior to, during or after an intervention and may be used in an evaluative context. |
Needs assessments and appraisals | Needs assessments and appraisals are tools enabling decision makers to choose and decide between optional activities, as well as refine the final design of a project or programme. |
Audit | An audit can be defined as an activity of supervision verifying whether existing policies, norms and instruments are being applied and used adequately. An audit also examines the adequacy of organizational structures and systems and performs risk assessments, focusing on accountability and control of the efficient use of resources. |
2018c IOM Evaluation Policy . Office of the Inspector General. September.
2010 Glossary of Key Terms in Evaluation and Results Based Management . OECD/DAC , Paris.
Although often grouped together, M&E are two distinct but related functions. Recognizing the difference between monitoring and evaluation helps those implementing interventions understand that the two are indeed complementary and mutually beneficial functions. The main differences between them are their focus of assessment and the timing in which each is conducted.
Monitoring , on the one hand, focuses on whether the implementation is on track to achieving its intended results and objectives, in line with established benchmarks. Evaluation , on the other hand, can provide evidence on whether the intervention and its approach to implementation is the right one, and if so, how and why changes are taking place. Evaluation also highlights the strengths and weaknesses of the design of the intervention. In other words, while monitoring can provide information on how the implementation is doing, evaluation can go a step further and demonstrate whether the expected change has been attained, whether the intervention contributed to that change ( impact analysis/evaluation ) and whether the intervention itself and its approach were the most suited to address the given problem.
In terms of timing, while monitoring tracks an intervention’s progress and achievement of results on an ongoing basis, throughout implementation, evaluation is usually a one-off activity, undertaken at different points of an intervention’s life cycle.
Keeping the vertical logic in mind when monitoring an intervention is useful, as it helps clarify the specific level of result being monitored and how individual results contribute to the overall implementation objectives. 15 In this sense, monitoring can function as a tool to help review the management of objectives. Similarly, when evaluating an intervention, it is important to consider its vertical logic to enable a more holistic approach to evaluation.
The following two diagrams show monitoring and evaluation in relation to the vertical logic. Chapter 3 of the IOM Monitoring and Evaluation Guidelines will further elaborate the vertical logic. Note that the two diagrams include indicative questions that pertain to monitoring and evaluation, and that there may be many other questions applicable in the context of vertical logic that are not included in the following figures.
Figure 1.3. Monitoring and vertical logic
Figure 1.4. Evaluation and vertical logic
Source : Adapted from IFRC, 2011 . See also OECD, n.d.
Key differences between monitoring and evaluation

Monitoring | Evaluation |
---|---|
Monitoring is the continuous, systematic collection of data/information throughout the implementation of an intervention as part of intervention management. It focuses on the implementation of an intervention, comparing what is delivered to what was planned. | Evaluation is a scheduled, periodic and in-depth assessment at specific points in time (before, during, at the end of or after an intervention). It is a specific process that assesses the success of an intervention against an established set of evaluation criteria. |
It is usually conducted by people directly involved in implementing the intervention. | It is usually conducted by people not having directly participated in the intervention. |
It routinely collects data against indicators and compares achieved results with targets. | It assesses causal contributions of interventions to results and explores unintended results. |
It focuses on tracking the progress of regular or day-to-day activities during implementation. | It assesses whether, why and how well change has occurred and whether the change can be attributed to the intervention. |
It looks at production of results at the output and outcome level. | It looks at performance and achievement of results at the output, outcome, as well as the objective level. |
It concentrates on planned intervention elements. | It assesses planned elements, looks for unplanned change, and examines causes, challenges, risks, assumptions and sustainability. |
n.d. OECD DAC Criteria for Evaluating Development Assistance .
This section focuses on the strategic orientation at IOM 16 and how it relates to M&E .
The Strategic Vision spans 2019–2023 and is the Director General’s articulation of how IOM as an organization needs to develop over a five-year period in order to meet new and emerging responsibilities at the global, regional, country and project levels. The Strategic Vision will guide the Organization into the future and turn IOM’s aspirations into reality.
It has a number of different components.
The Strategic Vision is operationalized through the SRF, which defines four overarching global objectives for the Organization, accompanied by a limited number of long-term and short-term outcomes and outputs that articulate how these highest-level objectives will be reached. These high-level results and the key performance indicators that help measure them can and should be used within projects and programmes to ensure alignment with the Strategic Vision and other key global frameworks like the SDGs and the Global Compact for Migration.
a) Be familiar with the Strategic Vision and the institutional results framework.
b) Where possible, projects should be aligned to the SRF at the outcome or output levels.
c) Regional and country offices should align any future country or regional strategies with the Strategic Vision and the SRF, although they still have flexibility to adjust for local needs.
MiGOF 17 was endorsed by IOM Member States at the IOM Council in 2015. MiGOF is now the overarching framework for all of the Organization’s work. MiGOF is linked to the SDGs and represents an ideal for migration governance to which States can aspire.
The principles propose the necessary conditions for migration to be well managed, creating a more effective environment for maximized results so that migration is beneficial to all. They represent the means through which a State will ensure that the systemic requirements for good migration governance are in place. | The objectives are specific and do not require any further conventions, laws or practices beyond those that already exist. Taken together, these objectives ensure that migration is governed in an integrated and holistic way, responding to the need to consider mobile categories of people and address their needs for assistance in the event of an emergency, building the resilience of individuals and communities, and ensuring opportunities for the economic and social health of the State. |
Source : IOM, 2016b .
MiGOF envisages a migration system that promotes human mobility and benefits migrants and society when it:
The system also seeks to:
The SDGs 18 were adopted by the United Nations General Assembly in September 2015. With the SDGs, migration has, for the first time, been inserted into mainstream development policy. The central reference to migration in the 2030 Agenda is Target 10.7 under the goal “Reduce inequality within and among countries”. It is a call to “facilitate orderly, safe, regular and responsible migration and mobility of people, including through the implementation of planned and well-managed migration policies”. However, migration and migrants are directly relevant to the implementation of all the SDGs and many of their targets. The SDGs, and the commitment to leave no one behind and to reach the furthest behind, will not be achieved without due consideration of migration. IOM’s Migration and the 2030 Agenda: A Guide for Practitioners outlines these interlinkages in detail.
As part of IOM’s effort to track progress on the migration aspects of the SDGs, IOM and the Economist Intelligence Unit published a Migration Governance Index in 2016. Based on MiGOF categories, the Index, which is the first of its kind, provides a framework for countries to measure their progress towards better migration governance at the policy level.
Within IOM’s institutional strategy on migration and sustainable development, IOM has committed to three main outcomes: (a) human mobility is increasingly a choice; (b) migrants and their families are empowered; and (c) migration is increasingly well-governed. To achieve these outcomes, IOM has committed to four institutional outputs: (a) improved policy capacity on migration and sustainable development through a more robust evidence base and enhanced knowledge management; (b) stronger partnerships across the United Nations development system and beyond that harness the different expertise and capabilities of relevant actors on migration and sustainable development; (c) increased capacity to integrate migration into the planning, implementation, monitoring and reporting of the 2030 Agenda; and (d) high-quality migration programming that contributes to positive development outcomes.
In relation to output (a), a stronger evidence base on migration and sustainable development is crucial if the development potential of migration is to be realized. Enhancing IOM’s capacity to apply quality M&E in its programming from a development perspective will be crucial in this regard. This will also help enhance IOM’s capacity to showcase how its work supports the achievement of the 2030 Agenda through high-quality programming that contributes to development outcomes, as outlined in output (d). IOM also has the responsibility to support its Member States in achieving the same and to ensure that monitoring, evaluation and reporting on migration governance efforts are aligned with, and contribute to, their efforts to achieve the 2030 Agenda. Thus, output (b) on building stronger partnerships across the United Nations development system and beyond will be crucial to ensure that migration is firmly featured in UNSDCFs and other development agendas, as well as national and local policies and programming. IOM’s role as coordinator of the United Nations Network on Migration will allow the Organization to achieve this within UNCTs. IOM has developed an action plan to achieve all of this, driven by IOM’s Migration and Sustainable Development Unit and overseen by IOM’s organization-wide Working Group on the SDGs.
The UNSDCF 19 (formerly the United Nations Development Assistance Framework or UNDAF ) is now “the most important instrument for planning and implementation of the United Nations development activities at country level in support of the implementation of the 2030 Agenda for Sustainable Development”. 20
It is a strategic medium-term results framework that represents the commitment of the UNCT of a particular country to supporting that country’s longer-term achievement of the SDGs. Furthermore, it is intended as an instrument that drives strategic planning, funding, implementation, monitoring, learning, reporting and evaluation for the United Nations, in partnership with host governments and other entities.
The UNSDCF explicitly seeks to ensure that government expectations of the United Nations development system will drive its contributions at the country level and that these contributions emerge from an analysis of the national landscape vis-à-vis SDG priorities. It is therefore “the central framework for joint monitoring, review, reporting and evaluation of the United Nations development system’s impact in a country achieving the 2030 Agenda [for Sustainable Development]”. 21
For more information regarding the UNSDCF, see The Cooperation Framework .
The Migration Crisis Operational Framework 22 ( MCOF ) was approved by IOM Council in 2012 and combines humanitarian activities and migration management services. Some of the key features of MCOF are as follows:
MCOF helps crisis-affected populations, including displaced persons and international migrants stranded in crisis situations in their destination/transit countries, to better access their fundamental rights to protection and assistance.
MCOF should be adapted to each context and can be used for analysing the migration patterns in a country and developing a strategic direction of a country together with MiGOF. Projects and programmes should be aligned to MCOF, and MCOF strategy progress should be monitored through specific and measurable results.
The Global Compact for Migration is the first intergovernmentally negotiated agreement, prepared under the auspices of the United Nations, covering all dimensions of international migration in a holistic and comprehensive manner. It is a non-binding document that respects States’ sovereign right to determine who enters and stays in their territory and demonstrates commitment to international cooperation on migration. It presents a significant opportunity to improve the governance of migration, to address the challenges associated with today’s migration, and to strengthen the contribution of migrants and migration to sustainable development. The Global Compact for Migration is framed in a way consistent with Target 10.7 of the 2030 Agenda, in which Member States commit to cooperating internationally to facilitate safe, orderly and regular migration. The Global Compact for Migration is designed to:
The Global Compact for Migration contains 23 objectives for improving migration management at all levels of government. The 23 objectives can be found in paragraph 16 of the United Nations General Assembly Resolution adopting the Global Compact for Safe, Orderly and Regular Migration. 23
2012 Resolution No. 1243 on Migration Crisis Operational Framework . Adopted on 27 November.
2016a IOM Chiefs of Mission Handbook 2016 . Geneva (Internal link only).
2016b Migration Governance Framework . Brochure. Geneva.
2018d Migration and the 2030 Agenda: A Guide for Practitioners . Geneva.
2020b Strategic Vision: Setting a Course for IOM . Geneva.
United Nations
2018a United Nations General Assembly Resolution 72/279 on Repositioning of the United Nations development system in the context of the quadrennial comprehensive policy review of operational activities for development of the United Nations System . Adopted on 31 May (A/RES/72/279).
2018b United Nations General Assembly Resolution 73/195 on the Global Compact for Safe, Orderly and Regular Migration . Adopted on 19 December (A/RES/73/195).
n.d. United Nations Sustainable Development Goals .
United Nations Sustainable Development Group ( UNSDG )
2019 United Nations Sustainable Development Cooperation Framework – Internal Guidance .
Critical thinking
Critical thinking refers to the process of actively analyzing, assessing, synthesizing, evaluating and reflecting on information gathered from observation, experience, or communication. It is thinking in a clear, logical, reasoned, and reflective manner to solve problems or make decisions. Basically, critical thinking is taking a hard look at something to understand what it really means.
Critical thinkers do not simply accept all ideas, theories, and conclusions as facts. They have a mindset of questioning ideas and conclusions. They make reasoned judgments that are logical and well thought out by assessing the evidence that supports a specific theory or conclusion.
When presented with a new piece of information, critical thinkers may ask questions such as:
“What information supports that?”
“How was this information obtained?”
“Who obtained the information?”
“How do we know the information is valid?”
“Why is it that way?”
“What makes it do that?”
“How do we know that?”
“Are there other possibilities?”
Many people perceive critical thinking just as analytical thinking. However, critical thinking incorporates both analytical thinking and creative thinking. Critical thinking does involve breaking down information into parts and analyzing the parts in a logical, step-by-step manner. However, it also involves challenging consensus to formulate new creative ideas and generate innovative solutions. It is critical thinking that helps to evaluate and improve your creative ideas.
Critical thinking is considered a higher-order thinking skill, involving analysis, synthesis, deduction, inference, reasoning, and evaluation. In order to demonstrate critical thinking, you would need to develop skills in:
Interpreting: understanding the significance or meaning of information.

Analyzing: breaking information down into its parts.

Connecting: making connections between related items or pieces of information.

Integrating: connecting and combining information to better understand the relationships between pieces of information.

Evaluating: judging the value, credibility, or strength of something.

Reasoning: creating an argument through logical steps.

Deducing: forming a logical opinion about something based on the information or evidence that is available.

Inferring: figuring something out through reasoning based on assumptions and ideas.

Generating: producing new information, ideas, products, or ways of viewing things.
Monitoring and evaluation are essential to any project or program. Through this process, organizations collect and analyze data and determine whether a project or program has fulfilled its goals. Monitoring begins right away and extends through the duration of the project. Evaluation comes after and assesses how well the program performed. Every organization should have an M&E system in place. Here are several reasons why:
Because organizations track, analyze, and report on a project during the monitoring phase, there’s more transparency. Information is freely circulated and available to stakeholders, which gives them more input on the project. A good monitoring system ensures no one is left in the dark. This transparency leads to better accountability. With information so available, organizations need to keep everything above board. It’s also much harder to deceive stakeholders.
Projects never go perfectly according to plan, but a well-designed M&E system helps the project stay on track and perform well. M&E plans help define a project’s scope, establish interventions when things go wrong, and give everyone an idea of how those interventions affect the rest of the project. This way, when problems inevitably arise, a quick and effective solution can be implemented.
Every project needs resources. How much cash is on hand determines things like how many people work on a project, the project’s scope, and what solutions are available if things get off course. The information collected through monitoring reveals gaps or issues, which require resources to address. Without M&E, it wouldn’t be clear what areas need to be a priority. Resources could easily be wasted in one area that isn’t the source of the issue. Monitoring and evaluation helps prevent that waste.
Mistakes and failures are part of every organization. M&E provides a detailed blueprint of everything that went right and everything that went wrong during a project. Thorough M&E documents and templates allow organizations to pinpoint specific failures, as opposed to just guessing what caused problems. Often, organizations can learn more from their mistakes than from their successes.
Data should drive decisions. M&E processes provide the essential information needed to see the big picture. After a project wraps up, an organization with good M&E can identify mistakes, successes, and things that can be adapted and replicated for future projects. Decision-making is then influenced by what was learned through past monitoring and evaluation.
Developing a good M&E plan requires a lot of organization. That process in itself is very helpful to an organization. It has to develop methods to collect, distribute, and analyze information. Developing M&E plans also requires organizations to decide on desired outcomes, how to measure success, and how to adapt as the project goes on, so those outcomes become a reality. Good organizational skills benefit every area of an organization.
Organizations don’t like to waste time on projects or programs that go nowhere or fail to meet certain standards. The benefits of M&E that we’ve described above – such as catching problems early, good resource management, and informed decisions – all result in information that ensures organizations replicate what’s working and let go of what’s not.
Monitoring and evaluation can help fuel innovative thinking and methods for data collection. While some fields require specific methods, others are open to more unique ideas. As an example, fields that have traditionally relied on standardized tools like questionnaires, focus groups, interviews, and so on can branch out to video and photo documentation, storytelling, and even fine arts. Innovative tools provide new perspectives on data and new ways to measure success.
With monitoring and evaluation, the more information the better. Every team member offers an important perspective on how a project or program is doing. Encouraging diversity of thought and exploring new ways of obtaining feedback enhance the benefits of M&E. Tools like surveys are only truly useful if they capture a wide range of people and responses. In good monitoring and evaluation plans, all voices are important.
While certain organizations can use more unique M&E tools, all organizations need some kind of monitoring and evaluation system. Whether it’s a small business, corporation, or government agency, all organizations need a way to monitor their projects and determine if they’re successful. Without strong M&E, organizations aren’t sustainable, they’re more vulnerable to failure, and they can lose the trust of stakeholders.
Triangulation facilitates the validation of data through cross-verification from two or more sources.
It tests the consistency of findings obtained through different instruments and increases the chance to control, or at least assess some of the threats or multiple causes influencing our results.
Triangulation is not just about validation but about deepening and widening one’s understanding. It can be used to produce innovation in conceptual framing. It can lead to multi-perspective meta-interpretations. “[Triangulation is an] attempt to map out, or explain more fully, the richness and complexity of human behavior by studying it from more than one standpoint” (Cohen and Manion).
Denzin (1973, p. 301) proposes four basic types of triangulation: data triangulation (varying time, space, and persons), investigator triangulation (using multiple researchers), theory triangulation (using more than one theoretical scheme to interpret the phenomenon), and methodological triangulation (using more than one method to gather data).
Carvalho and White (1997) propose four reasons for undertaking triangulation:
The problem with relying on just one method comes down to bias. There are several types of bias encountered in research, and triangulation can help counter most of them.
An evaluation matrix, as shown below, will help you check that the planned data collection covers all the key evaluation questions (KEQs), see whether there is sufficient triangulation between different data sources, and help you design questionnaires, interview schedules, data extraction tools for project records, and observation tools so that they gather the necessary data.
| | Participant Questionnaire | Key Informant Interviews | Project Records | Observation of program implementation |
|---|---|---|---|---|
| KEQ1 What was the quality of implementation? | ✔ | ✔ | ✔ | ✔ |
| KEQ2 To what extent were the program objectives met? | ✔ | ✔ | ✔ | |
| KEQ3 What other impacts did the program have? | ✔ | ✔ | | |
| KEQ4 How could the program be improved? | ✔ | ✔ | ✔ | |
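The coverage-and-triangulation check that such a matrix supports can also be sketched in code. The following is a minimal Python sketch, assuming (as the matrix implies) that each KEQ should be covered by at least two data sources; the dictionary keys and source names simply mirror the table:

```python
# Map each key evaluation question (KEQ) to the data sources planned to
# address it (mirroring the evaluation matrix above).
matrix = {
    "KEQ1 Quality of implementation": {"questionnaire", "interviews", "records", "observation"},
    "KEQ2 Objectives met": {"questionnaire", "interviews", "records"},
    "KEQ3 Other impacts": {"questionnaire", "interviews"},
    "KEQ4 Improvements": {"questionnaire", "interviews", "records"},
}

def coverage_gaps(matrix, min_sources=2):
    """Return the KEQs covered by fewer than `min_sources` data sources,
    i.e. the questions lacking enough triangulation."""
    return [keq for keq, sources in matrix.items() if len(sources) < min_sources]

print(coverage_gaps(matrix))  # every KEQ here has at least two sources, so no gaps
```

Dropping a source from the plan immediately shows up as a gap, which is the same check an evaluator would otherwise do by scanning the matrix row by row.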
Carvalho, S. and White, H. (1997). Combining the quantitative and qualitative approaches to poverty measurement and analysis: The practice and the potential . World Bank Technical Paper 366. Washington, D.C.: World Bank
Cohen, L. & Manion, L. Research methods in education . Routledge.
Denzin, Norman K. (1973). The research act: A theoretical introduction to sociological methods. New Jersey: Transaction Publishers.
Kennedy, Patrick. (2009). How to combine multiple research options: Practical Triangulation. http://johnnyholland.org/2009/08/20/practical-triangulation (archived link)
Planning, monitoring and evaluation are at the heart of a learning-based approach to management. Achieving collaborative, business/environmental or personal goals requires effective planning and follow-through. The plan is effectively a “route-map” from the present to the future. To plan a suitable route you must know where you are (situation analysis) and where you want to go (establish goals and identify outcomes). Only then can appropriate action plans be developed to help achieve the desired future.
However, because the future is uncertain, our action plans must be adaptive and continually allow for “learning by doing”. To do this we need appropriate monitoring and evaluation (M&E) tools and processes, and information flows that help the different stakeholders involved check that their efforts are proceeding as planned, and refine and guide their responses if changes are needed.
Both sets of plans are best developed in conjunction with the people who will carry them out, as they are then more likely to actually do so. As the accompanying diagram shows, there are two sets of monitoring plans needed. Results monitoring focusses on whether you are getting where you want to go, while process monitoring focusses on how efficiently you are getting there.
Worldwide there is a trend towards an increased use of indicators to monitor development and track progress. Indicators quantify and simplify phenomena, and help us understand and make sense of complex realities. Indicators in this regard may be either qualitative or quantitative, and a combination of the two is often best. An evaluation is like a good story: it needs some (qualitative) anecdotal evidence to put the lesson in context, and some (quantitative) facts and figures to reinforce the message.
Often people talk about logic models and theory of change processes interchangeably. Logic models – such as the ones above – connect programmatic activities to client or stakeholder outcomes. But a theory of change goes further, specifying how to create a range of conditions that help programmes deliver on the desired outcomes. These can include setting out the right kinds of partnerships, types of forums, particular kinds of technical assistance, and tools and processes that help people operate more collaboratively and be more results focused.
The diagram below shows an outcomes or logic model approach to project planning. This describes logical linkages among programme resources, activities, outputs, and audiences, and highlights different orders of outcomes related to a specific problem or situation. Importantly, once a programme has been described in terms of the logic model, critical measures of performance can be identified. In this way logic models can be seen to support both planning and evaluation. As the diagram below shows different evaluation types/approaches can be used to measure different parts of the overall project or change initiative.
Within the broader monitoring and evaluation context there are a number of framings that will influence the final approaches chosen. These are highlighted on different pages here. Different types of evaluations answer different questions. Related frameworks can provide ideas as to the scale and levels of programme intensity to be considered by stakeholders. Within these overall framings, more examples of evaluation techniques can be found on the approaches page.
A common focus of monitoring and evaluation (M&E) is to report on how you are tracking against outputs and short or long-term outcomes. While reporting on outputs and outcomes is an important purpose for projects, learning from these activities is equally important. A monitoring, evaluation and learning (MEL) approach encourages and fosters a focus on monitoring performance, selectively evaluating activities, and supporting continuous learning .
MEL assists organizations to clarify intentions, to collect essential data for measuring efficiency and impact, and to identify and monitor levers for change. Ideally, MEL processes also include a realistic evaluation of capability and capacity (internally and externally across the decision-making setting) to respond and adapt with agility.
Other related site pages include links to guides on managing participation and improving facilitation. By fostering good participation in planning, monitoring and evaluation we can support empowerment, motivation and strengthened relationships . This can help in supporting innovative project and development approaches through social learning , the improved use of indicators to monitor development and track progress, and adaptive management .
Who is a monitoring and evaluation officer?
As a Monitoring and Evaluation Officer, you will develop and implement monitoring and evaluation frameworks, collect and analyze data, and generate reports to inform decision-making and improve program outcomes. Your role involves working closely with project teams, stakeholders, and partners to ensure accountability and learning.
What does a monitoring and evaluation officer do?
A Monitoring and Evaluation Officer designs, implements, and manages monitoring and evaluation systems to assess the effectiveness and impact of programs, projects, or interventions, collecting and analyzing data to inform decision-making and improve program outcomes.
What qualifications are required to become a monitoring and evaluation officer?
Typically, a bachelor's degree in social sciences, international development, statistics, or a related field is required, along with proven experience in monitoring and evaluation roles. Strong analytical, communication, and project management skills are essential for success in this role.
Vol. 55 No. 6 Print version: page 10
At least 21 state legislatures have taken steps to reform K–12 media and information literacy education, with California, Delaware, Illinois, and New Jersey passing comprehensive reforms ( U.S. Media Literacy Policy Report, Media Literacy Now , 2024 ). The largely bipartisan efforts are a response to challenges that most school curriculums do not yet address or teach—skills like sorting out what is true or false online, identifying when content is produced by artificial intelligence (AI), and how to use social media safely.
“We’ve all seen how the spread of online misinformation and disinformation is growing and that it has real-world consequences,” said Assemblymember Marc Berman, JD, an attorney who represents California’s 23rd District and spearheaded the state’s digital literacy education law. “I can’t force adults to go back to school and take media literacy, but at a minimum, we can make sure that our young people are getting the skills they need for today’s world.”
People of all ages are susceptible to misinformation, but youth—who spend an average of 4 to 6 hours per day online—say they need help. In one survey of young adults in Canada, 84% were unsure they could distinguish fact from fiction on social media ( Youth Science Survey , Canada Foundation for Innovation, 2021 ). In a study led by educational psychologist Sam Wineburg, PhD, 82% of middle school students could not tell the difference between an online news story and an advertisement ( Evaluating Information: The Cornerstone of Civic Online Reasoning , Stanford Digital Repository, 2016 ).
“It’s those kinds of findings that have gotten the attention of legislators,” said Wineburg, who is an emeritus professor at Stanford University and cofounder of the Digital Inquiry Group (DIG), a nonprofit that creates free research-backed digital literacy tools for educators.
“Increasingly, as young people’s apps of choice are TikTok and YouTube, the adults have woken up to the fact that quality information is to civic understanding what clean air and water are to civic health,” Wineburg said.
The most comprehensive programs, which are now being developed and tested for K–12 audiences, also aim to teach students how to locate and assess the source of online information and to think critically about how generative AI produces content. They also teach students about digital citizenship, which involves engaging respectfully with others online.
Psychologists are a key part of those efforts. In its 2023 Health Advisory on Social Media Use in Adolescence , APA recommended psychologically informed media literacy training for youth, guidance echoed by U.S. Surgeon General Vivek H. Murthy. What is needed now is ongoing research on what works, as well as strong collaboration with journalists, educators, and policymakers to swiftly put research insights into practice.
This year, APA also released an updated scientific roundup focused on the risks of social media content, features, and functions . The report also provides concrete recommendations for minimizing psychological harm, including tips for monitoring use.
“To me, this is really one of the most important things we can be doing right now as psychologists, given how misinformation has made science political in ways that are really frightening,” said Susan Nolan, PhD, a professor of psychology at Seton Hall University in New Jersey who studies and advocates for scientific literacy.
While social media platforms typically require users to be 13 or older, most adolescents create accounts before then, at a time when their brains are particularly vulnerable to social influence ( The Common Sense Census: Plugged-In Parents of Tweens and Teens, Common Sense Media , 2016 ). In addition to the interpersonal risks of getting online, surveys show that adolescents are more likely to believe conspiracy theories than adults—particularly those adolescents who spend a lot of time on social media (“ Belief in Conspiracy Theories Higher Among Teenagers Than Adults, as Majority of Americans Support Social Media Reform, New Polling Finds ,” Center for Countering Digital Hate, Aug. 16, 2023).
“Media literacy is literacy in the 21st century, and we don’t start teaching literacy in high school,” said Erin McNeill, founder and CEO of Media Literacy Now , an organization dedicated to K–12 media literacy reform. “It’s an essential life skill that has to be built on a foundation, not rolled out at the last minute.”
Psychological research has played an important role in demonstrating the need for starting media literacy training early and in passing corresponding educational reforms at the state level. In a 2021 study by Wineburg and his colleagues, 3,446 census-matched high school students were tasked with investigating a website, CO2 Science , and evaluating whether it provided reliable information about human-induced climate change. Only 4% of students discovered that the site’s chief sponsor was ExxonMobil ( Educational Researcher, Vol. 50, No. 8, 2021 ).
More than half of the students in the study also believed that a Facebook video that appeared to show ballot stuffing, shot in Russia and posted anonymously, was “strong evidence” of U.S. voter fraud.
“We leaned on these studies when justifying the legislation because they show how the internet and social media make it a lot easier to select only the information that supports our preexisting beliefs, rather than providing a more balanced view,” said Berman, who also pointed to APA’s 2023 Health Advisory on Social Media Use in Adolescence to support the need for policy reform.
Drawing on psychological research, APA’s latest guidance recommends a series of digital literacy competencies that can provide a starting point for policymakers. Those include understanding the tactics used to spread mis- and disinformation, limiting overgeneralizations that lead people to incorrectly interpret others’ beliefs, and helping young people learn to nourish healthy online relationships.
“Developmentally, adolescents are especially vulnerable to the features of social media that are designed to keep users online, such as likes, push notifications, autoplay, and algorithms that deliver extreme content,” said Sophia Choukas-Bradley, PhD, an associate professor of psychology at the University of Pittsburgh who contributed to both APA reports. “As psychologists, we need to provide teens with digital literacy and skills to combat these design features while simultaneously pushing for policies that require tech companies to change the platforms themselves.”
With legislation now in place, New Jersey’s Department of Education is crafting its detailed information literacy standards, drawing on APA’s Resolution on Combating Misinformation and Promoting Psychological Science Literacy (PDF, 53KB) in the process. The curriculum will include training on such topics as the scientific method, the difference between primary and secondary sources, how to differentiate fact from opinion, and the ethical production of information (including data ethics).
“When you look at what is in the curriculum, really all of it ultimately ties to psychology,” Nolan said about the New Jersey law.
Progress at the state level is meaningful, but mandates do not necessarily equal action. It can take years for state educational boards to develop and implement curriculum reforms, especially if research has not clearly shown what works.
“It’s one thing to pass a law, but it’s quite another to develop and fund evidence-based professional development programs for teachers, many of whom do not feel up to this task” without further training, Wineburg said.
Policymakers, educators, librarians, and even journalists are putting their heads together to decide what and how to teach media literacy to kids and teens. But those on the front lines also stress the importance of sound science that can guide the development of interventions from the get-go.
“What happens often in K–12 education is we get separated from the research,” said Kathryn Procope, EdD, executive director at Howard University Middle School of Mathematics and Science in Washington, D.C. “Getting connected with what the research says can help educators sit down collectively and decide what we’re going to do” when new challenges arise.
DIG offers one solution: its Civic Online Reasoning program, a free curriculum that teaches lateral reading—a fact-checking method where readers evaluate source credibility, such as by searching for background in a separate browser tab. The program also teaches skills such as click restraint, the strategy of looking past the first results suggested by search engines to results from more credible sources.
“Behind lateral reading is the idea that we need to think about online information in a fundamentally different way,” Wineburg said. “Rather than immediately looking at the claim, we want people asking: Who is the person or the organization behind this claim?”
Studies of lateral reading interventions show that they can change the way young people interact with information online. Students who completed six 50-minute lessons in a field study across six Lincoln, Nebraska, high schools were significantly more accurate in assessing source credibility than their peers who did not get the intervention ( Journal of Educational Psychology , Vol. 114, No. 5, 2022 ). In Canada, 2,278 middle and high school students completed the CTRL-F lateral reading program. Beforehand, only 6% could identify the agenda of an advocacy group, but that number rose to 31% after the intervention and to 49% 6 weeks later ( Brodsky, J. E., et al., AERA Open , Vol. 9, 2023; The Digital Media Literacy Gap , CIVIX Canada, 2021 ).
Research conducted in Germany and Italy also found that lateral reading helped news consumers identify false information online, and that pop-up reminders and monetary incentives can increase the practice of lateral reading and click restraint ( Fendt, M., et al., Computers in Human Behavior , Vol. 146, 2023 ; Panizza, F., et al., Scientific Reports , Vol. 12, 2022 ).
Choukas-Bradley is working with the Center for Digital Thriving at Harvard Graduate School of Education and Common Sense Media to develop and evaluate resources that educate adolescents about the social media features designed to keep them online, as well as to teach cognitive and behavioral techniques that promote healthier social media use.
“We listen closely to students and then trace the connections to key evidence-based practices,” said Emily Weinstein, EdD, cofounder of the Center for Digital Thriving, which offers resources codesigned by educators, students, and clinical psychologists.
For example, teens share common thinking traps that are amplified by tech, such as “everyone on social media is happier than me,” or “my friend must be mad if they haven’t responded to my Snap.” Both are examples of cognitive distortions, for which psychologists have a robust evidence base.
“There’s real power in the idea that ‘if you can name it, you can tame it,’ which is one reason we want every student to know about common thinking traps,” Weinstein said.
Educators and researchers are aware of the irony behind adults teaching digital natives how to use platforms with which they are already intimately familiar. For that reason, some are working with kids and teens to teach digital literacy in ways that are meaningful to them.
“Students are far ahead of educators when it comes to using new technologies, so the more that young people are involved in the design of the curriculum that will be used to teach media literacy in 2024 and beyond, the better,” said Chelsea Waite, a principal investigator at the Center on Reinventing Public Education at Arizona State University’s Mary Lou Fulton Teachers College who studies innovative practices at K–12 schools across the United States.
DIG has partnered with Microsoft to integrate information literacy quests that focus on exploring bias and persuasion—for example, when information is trustworthy enough to be shared with others—into the video game Minecraft. Mizuko Ito, PhD, a cultural anthropologist who has studied youth-centered learning for years and directs the Connected Learning Lab at the University of California, Irvine, coleads the Connected Learning Alliance , which fosters partnerships between researchers, developers, and youth to generate new technologies that prioritize connection and well-being rather than profit. One of the organization’s latest projects, Connected Camps , pairs 8- to 13-year-old gamers with college gamers to learn about digital citizenship and to become part of a prosocial online community.
“We know that it’s so much more effective to do online literacy learning and skills development within the context of something youth actually care about, like the gaming universe,” Ito said.
Other youth media organizations are leveraging content young people care about to equip and empower them to create positive online spaces. The This Teenage Life podcast, for example, is a school-based program that teaches kids to produce a podcast while thinking critically about how to engage with today’s digital ecosystem and be a good citizen online.
“As educators, we have to remember that young people nowadays are going to ask: Why am I learning this? It doesn’t have anything to do with what I care about,” Procope said. “That means that we have to do what we’re doing a lot differently.”
[ Related: New approaches to AI in the K-12 classroom ]
The online world has wrought so much change that many experts say education must fundamentally change, too.
“Right now, the approach is to treat information literacy as a patch to put on the whole of the curriculum,” Wineburg said. “But really the challenge, when students are leading digital lives, is to fundamentally rethink the entire curriculum we have.”
That’s a tall order, but a starting point is to interweave digital and media literacy lessons throughout multiple courses rather than treat the subject as a separate entity. For example, a high school biology lesson about vaccines will be more meaningful to students if it acknowledges and addresses the pseudoscientific information they see daily on TikTok, such as the supposed health benefits of castor oil, Wineburg said. Another idea: Students can learn about the strengths and weaknesses of ChatGPT in a history class by asking questions about a historical event where the facts are unclear, such as who fired the first shot in the Battle of Lexington, the first volley in the Revolutionary War.
“Whether it’s debunking pseudoscience on social media or understanding the nuances of AI in history class, every subject offers an opportunity to cultivate these skills,” said Nicole Barnes, PhD, senior director of APA’s Center for Psychology in Schools and Education (CPSE). “After all, we’re not just preparing students for exams but for life in a digital world. This is exactly what we are doing in the CPSE—providing pre-K–12 educators with teaching and learning resources that are grounded in psychological science.”
Several states are aiming for such integration by giving librarians a central role in administering media literacy training throughout schools. The International Society for Technology in Education (ISTE) also recommends a comprehensive approach to K–12 training on technology and online media.
“The people leading these efforts—from national organizations to state legislators—are starting to see this as something that needs to be integrated throughout the entire curriculum,” McNeill said.
The top priority now is to provide states, districts, and schools with packaged materials that have been vetted by peer-reviewed research, Wineburg said. Educators should be wary of for-profit tools that have not been proven effective based on field studies in real classrooms. Still, McNeill said the current wave of digital literacy legislation is progress to be proud of.
“While we still have a lot to learn, we also know that there are risks for youth online,” McNeill said. “We have enough evidence now that there’s plenty of reason to take action.”
A new book for teens on spotting false information.
What fact-checkers know about media literacy—and students should, too Terada, Y., Edutopia , May 26, 2022
Teaching lateral reading: Interventions to help people read like fact checkers McGrew, S., Current Opinion in Psychology , 2024
Building media literacy into school curriculums worldwide Leedom, M., News Decoder , Feb. 29, 2024
Teaching digital well-being: Evidence-based resources to help youth thrive Weinstein, E., et al., Center for Digital Thriving, 2023
Fighting fake news in the classroom Pappas, S., Monitor on Psychology , January/February 2022
How to use ChatGPT as a learning tool Abramson, A., Monitor on Psychology , June 2023
Advancing Sustainable Cyber-Physical System Development with a Digital Twins and Language Engineering Approach: Smart Greenhouse Applications
1.1. Brief History of Digital Twin Technology
1.2. Advances in the Agriculture Sector
1.3. Model-Driven Engineering as a Transformative Approach in Smart Agriculture
2.1. Digital Twins in Smart Agriculture
2.1.1. Digital Twins in Internet of Things
2.1.2. Digital Twins in Controlled Environment Agriculture
2.2. Language Engineering Approach in Modeling and Simulation Systems
2.3. Domain-Specific Languages and the Agriculture Sector
3. Methodology
3.1. Domain Analysis
3.1.1. Temperate Fruits Development and Growth
3.1.2. Types of Greenhouses
3.1.3. Sensors: Types, Number and Locations
3.2. Definition of Requirements of Domain-Specific Languages
A snapshot of GreenH Flow to show a humidity monitoring and control process.
4.1. Aspect Orientation and Separation of Concerns
4.2. Integration Functionality of GreenH Language
A snapshot of GreenH Design to show an example of specifying a configuration.
A snapshot of a corresponding GreenH Flow to show the predefined context that creates a smart control rule.
A snapshot of GreenH Design EBNF definition to extend the language with new features.
A snapshot of GreenH Flow to show the controlling rule corresponding to the extension.
5. Definition of GreenH Language Syntax
5.1. Formalizing GreenH Concrete Syntax
5.1.1. Formalizing the Concrete Syntax of GreenH Design DSL
Representing a simple monitoring system using GreenH DSLs.
A snapshot of GreenH Design to represent the structure of the greenhouse.
A snapshot of GreenH Flow to represent the data flow and control in the greenhouse.
A snapshot of the translated GreenH Twin representing the corresponding digital twin simulation system within the greenhouse.
7.1. Expert Participant Characteristics
7.2. Evaluation Strategy and Criteria
7.3. Results Discussion and Final Remarks
7.4. GreenH Verification Using Model Checking
The GreenH Flow DSL example used for verification. |
Institutional review board statement, informed consent statement, data availability statement, conflicts of interest.
Ref. | Parameters | IoT Systems |
---|---|---|
[ ] | Temperature, humidity, light intensity, pH value, and CO level | Temperature, light control, air pollution, and soil moisture monitoring |
[ ] | Temperature and energy | Temperature control (PID) |
[ ] | Temperature and energy | Temperature control (Fuzzy logic) |
[ ] | Temperature and humidity | Temperature control, humidity monitoring
[ ] | Temperature, humidity, luminosity, and CO | Temperature control, humidity, luminosity, and CO monitoring |
[ ] | Soil moisture, humidity, and temperature | Watering management system |
[ ] | CO levels, light intensity, and humidity | Soil irrigation system and more |
Ref. | DSL | Purpose | Language | Approach | Type |
---|---|---|---|---|---|
[ ] | SESSL | Writing simulation experiments | Scala | LE | Embedded |
[ ] | ScalaTion | Writing clear, concise, and intuitive simulation programs | Scala | LE | Embedded |
[ ] | RobotML | Designing, simulating, and deploying robotic applications | DSL | MDE | Metamodel |
[ ] | SimulateIoT | Designing simulation environments for IoT systems | DSL | MDE | Metamodel |
[ ] | SimulateIoT-FIWARE | Designing simulation environments for IoT systems for FIWARE platform. | DSL | MDE | Metamodel |
[ ] | DEVS-Ruby | Modelling and simulation of discrete event system specification (DEVS) | DSL | MDE | Finite State Automata
Stage No. | Apple | Strawberry |
---|---|---|
1 | Purpose: Dormancy and Bud Break; Temp: below 7 °C; Period: 3–4 months | Purpose: Planting and Root Development; Temp: 18 Day–12 Night °C; Period: 1–2 months |
2 | Purpose: Growth of Leaf and Floral Initiation; Temp: 21–24 °C; Period: 1–2 months | Purpose: Vegetative Growth; Temp: 25 Day–12 Night °C; Period: 2–3 months
3 | Purpose: Fruit Development and Growth; Temp: 22 Day–12 Night °C; Period: 30–40 days | Purpose: Flowering and Fruiting; Temp: 25 Day–12 Night °C; Period: 3–4 months |
4 | Purpose: Late Fruit Maturation and Harvest; Temp: 22 Day–12 Night °C; Period: 3–4 months | - |
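The stage/temperature table above can be read as plain data by a control rule. The following is a minimal sketch, not the GreenH implementation: the names `APPLE_STAGES` and `heating_action` are hypothetical, and stage 1's "below 7 °C" is simplified to a 7 °C target.

```python
# Hypothetical encoding of the apple growth-stage temperature targets
# from the table above, plus a helper a GreenH-style control rule
# might call. All names are illustrative, not from the paper.

APPLE_STAGES = {
    1: {"purpose": "Dormancy and Bud Break", "day_c": 7.0, "night_c": 7.0},    # "below 7 °C", simplified
    2: {"purpose": "Growth of Leaf and Floral Initiation", "day_c": 24.0, "night_c": 21.0},  # 21-24 °C band
    3: {"purpose": "Fruit Development and Growth", "day_c": 22.0, "night_c": 12.0},
    4: {"purpose": "Late Fruit Maturation and Harvest", "day_c": 22.0, "night_c": 12.0},
}

def heating_action(stage: int, is_day: bool, sensed_c: float, tolerance_c: float = 1.0) -> str:
    """Return 'heat', 'cool', or 'hold' for a sensed temperature."""
    target = APPLE_STAGES[stage]["day_c" if is_day else "night_c"]
    if sensed_c < target - tolerance_c:
        return "heat"
    if sensed_c > target + tolerance_c:
        return "cool"
    return "hold"
```

For example, a daytime reading of 18 °C in stage 3 (target 22 °C) would trigger heating.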
IoT System | Parameter to Measure | Measurement Unit |
---|---|---|
Temperature change parameter | Internal heating temperature | Celsius |
Humidity change parameter | Internal humidity level | Percentage |
Light change parameter | The intensity of light on surface | Lux |
Energy consumption parameter | Electrical power consumed by the system. | Watt |
Communication parameter | Data transmission between IoT components | Bits per second |
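The parameter/unit table maps naturally onto a typed sensor reading. Below is a small Python sketch under assumed names (`UNITS`, `Reading`); it is illustrative only and not part of the GreenH DSLs.

```python
from dataclasses import dataclass

# Hypothetical mapping of the measured parameters to their units,
# mirroring the table above. Names are illustrative.
UNITS = {
    "temperature": "Celsius",
    "humidity": "Percentage",
    "light": "Lux",
    "energy": "Watt",
    "communication": "Bits per second",
}

@dataclass(frozen=True)
class Reading:
    parameter: str  # one of the keys in UNITS
    value: float

    def __post_init__(self):
        # Reject parameters outside the table's vocabulary.
        if self.parameter not in UNITS:
            raise ValueError(f"unknown parameter: {self.parameter}")

    @property
    def unit(self) -> str:
        return UNITS[self.parameter]
```

A reading such as `Reading("humidity", 55.0)` then carries its unit (`"Percentage"`) with it, which keeps unit handling uniform across IoT components.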
ID | Criteria | Description |
---|---|---|
C1 | Expressiveness | Ability to model all necessary domain concepts accurately and completely. |
C2 | Readability | Clarity of the language syntax. |
C3 | Usability | Ease of use and learning for new users. |
C4 | Consistency | Uniformity in language constructs and their usage. |
C5 | Correctness | Enforcement of domain constraints to avoid invalid models. |
C6 | Scalability | Ability to handle different sizes and complexities of models. |
ID | Score (1) | Score (2) | Score (3) | Score (4) | Score (5) | Score (6) | Score (7) | Average Score | Summary of Expert Notes |
---|---|---|---|---|---|---|---|---|---|
C1 | 4 | 4 | 4 | 3 | 3.5 | 4 | 3 | 3.64 | Missing constructs for advanced scenarios in GreenH Design, such as network connection protocols and other types of smart devices. |
C2 | 4 | 5 | 4 | 4 | 4 | 3 | 3 | 3.86 | DSLs syntax is clear for target users in GreenH Design and Flow DSLs; further simplification is recommended in the Twin DSL. |
C3 | 4 | 3 | 4 | 3 | 4 | 3 | 2.5 | 3.36 | Difficulties might appear with more complex simulation scenarios. The execution engine should generate more boilerplate code, or a template-based approach to code generation should be considered.
C4 | 5 | 5 | 4 | 4 | 5 | 4 | 3 | 4.29 | DSLs constructs are consistent and each DSL focuses on a particular aspect of the language. |
C5 | 4 | 4 | 4 | 3.5 | 3 | 3 | 3 | 3.50 | The language is defined and restricted to a metamodel and EBNF. Further correctness tests are required in later stages of development. |
C6 | 5 | 3.5 | 4 | 4 | 4 | 3.5 | 4 | 4.00 | The DSLs can be scalable; advanced experiments are required.
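The per-criterion averages can be recomputed directly from the seven expert scores. The sketch below transcribes the scores from the table; `SCORES` and `averages` are illustrative names, not the authors' code.

```python
# Scores transcribed from the expert evaluation table above:
# seven expert ratings per criterion C1-C6.
SCORES = {
    "C1": [4, 4, 4, 3, 3.5, 4, 3],
    "C2": [4, 5, 4, 4, 4, 3, 3],
    "C3": [4, 3, 4, 3, 4, 3, 2.5],
    "C4": [5, 5, 4, 4, 5, 4, 3],
    "C5": [4, 4, 4, 3.5, 3, 3, 3],
    "C6": [5, 3.5, 4, 4, 4, 3.5, 4],
}

# Mean score per criterion, rounded to two decimals.
averages = {cid: round(sum(vals) / len(vals), 2) for cid, vals in SCORES.items()}
```

This yields, for example, C4 → 4.29 (highest, consistency) and C3 → 3.36 (lowest, usability).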
Subahi, A.F. Advancing Sustainable Cyber-Physical System Development with a Digital Twins and Language Engineering Approach: Smart Greenhouse Applications. Technologies 2024 , 12 , 147. https://doi.org/10.3390/technologies12090147
Air pollution, particularly PM2.5, has long been a critical concern for the atmospheric environment. Accurately predicting daily PM2.5 concentrations is crucial for both environmental protection and public health. This study introduces a new hybrid model within the “Decomposition-Prediction-Integration” (DPI) framework, which combines variational modal decomposition (VMD), a causal convolutional neural network (CNN), bidirectional long short-term memory (BiLSTM), and an attention mechanism (AM), named VCBA, for spatio-temporal fusion of multi-site data to forecast daily PM2.5 concentrations in a city. The approach involves integrating air quality data from the target site with data from neighboring sites, applying mathematical techniques for dimensionality reduction, decomposing PM2.5 concentration data using VMD, and utilizing Causal CNN and BiLSTM models with an attention mechanism to enhance performance. The final prediction results are obtained through linear aggregation. Experimental results demonstrate that the VCBA model performs exceptionally well in predicting daily PM2.5 concentrations at various stations in Taiyuan City, Shanxi Province, China. Evaluation metrics such as RMSE, MAE, and R² are reported as 2.556, 1.998, and 0.973, respectively. Compared to traditional methods, this approach offers higher prediction accuracy and stronger spatio-temporal modeling capabilities, providing an effective solution for accurate PM2.5 daily concentration prediction.
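The "Decomposition-Prediction-Integration" pipeline described above can be sketched in miniature. In this sketch, a simple two-component split (moving-average trend plus residual) stands in for VMD, and a naive persistence forecast stands in for the Causal-CNN/BiLSTM/attention predictor; only the linear-aggregation step mirrors the paper directly. All function names are illustrative, not the authors' code.

```python
import math

def decompose(series, window=3):
    """Stand-in for VMD: split a series into a centred moving-average
    trend and the residual around it."""
    half = window // 2
    trend = []
    for i in range(len(series)):
        chunk = series[max(0, i - half): i + half + 1]
        trend.append(sum(chunk) / len(chunk))
    residual = [x - t for x, t in zip(series, trend)]
    return trend, residual

def persistence_forecast(component):
    """Stand-in predictor: one-step-ahead forecast = previous value."""
    return component[:1] + component[:-1]

def dpi_forecast(series):
    """Decompose, predict each component, then linearly aggregate."""
    parts = decompose(series)
    forecasts = [persistence_forecast(p) for p in parts]
    return [sum(vals) for vals in zip(*forecasts)]

def rmse(y, yhat):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def mae(y, yhat):
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)
```

Because the decomposition is exact (trend + residual reconstructs the series), aggregating the per-component persistence forecasts recovers ordinary persistence on the original series; the value of VMD in the full model is that each narrow-band component is easier to predict than the raw signal.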
No datasets were generated or analysed during the current study.
This research was supported by the Foundation for Humanities and Social Sciences Research of the Ministry of Education in China in 2024: Study on dynamic multi-objective optimization mechanism of the coupled system of hydrological prediction and scheduling of terrace reservoir groups in the upper reaches of the Yangtze River.
Authors and affiliations.
College of Information, Shanghai Ocean University, Hucheng Huan Road 999, Pudong Shanghai, Shanghai, 201306, P. R. China
Xinrong Xie, Zhaocai Wang, Manli Xu & Nannan Xu
Xinrong Xie: Conceptualization, Methodology, Software, Data curation, Writing—original draft. Zhaocai Wang: Methodology, Data curation, Writing—review & editing, supervision. Manli Xu: Writing—review & editing. Nannan Xu: Methodology. All authors reviewed the manuscript.
Correspondence to Zhaocai Wang .
Ethical approval.
No human participants or animal subjects were involved in this work; hence, ethics approval is not applicable.
The authors declare no competing interests.
Publisher's note.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions.
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
Xie, X., Wang, Z., Xu, M. et al. Daily PM2.5 concentration prediction based on variational modal decomposition and deep learning for multi-site temporal and spatial fusion of meteorological factors. Environ Monit Assess 196 , 859 (2024). https://doi.org/10.1007/s10661-024-13005-2
Download citation
Received : 11 April 2024
Accepted : 15 August 2024
Published : 29 August 2024
DOI : https://doi.org/10.1007/s10661-024-13005-2
To that end, we propose that ET is essentially critical thinking applied to contexts of evaluation. We argue that ECB, and the field of evaluation more generally, would benefit from an explicit and transparent appropriation of well-established concepts and teaching strategies derived from the long history of work on critical thinking.
Three ways to incorporate evaluative thinking in monitoring. Monitoring and evaluation are traditionally considered to be interlinked but nevertheless distinct processes. Monitoring is an ongoing system of gathering information and tracking a project's performance using pre-selected indicators. Evaluation, by contrast, is about making overall ...
Monitoring and Evaluation Training: A Systematic Approach ...
We understand that "critical thinking is a knowledge-seeking process via reasoning skills to solve problems and make decisions which allows us ... In R, we find: organization (O), monitoring (M), and evaluation (E). This instrument comprehensively, and fairly clearly, brings together essential aspects of metacognition. On one side, there is ...
By epistemically vigilant we mean evaluating and monitoring the credibility and trustworthiness of information while being aware of the potential of being misinformed (Sperber et al., 2010). Epistemic vigilance is vital to critical thinking. We draw on Kuhn's (2018) definition of critical thinking as argumentation.
Evaluation: Through producing a final report and assessing the outcome, the evaluation stage concludes the critical thinking cycle and overall process. Still, continuing open discussions among teams is one of the best ways to support and expand the team knowledge and understanding of the risk-based monitoring approach, while external ...
Monitoring and evaluation (M&E) is a critical process for assessing the performance and effectiveness of programs, projects, and policies. This process involves collecting and analyzing data on program activities, outputs, outcomes, and impact to determine whether the desired results have been achieved.
Evaluative thinking is introduced as a form of critical thinking, and the resource then goes on to describe several key considerations in applying the technique. ... Evaluation Initiative, a global network of organizations and experts supporting country governments to strengthen monitoring, evaluation, and the use of evidence in their countries
Most of us, if not all of us, use informal M&E in our everyday decision-making processes. M&E is a management process that combines the oversight (monitoring) with the assessment of choices, processes, decisions, actions, and results (evaluation). It has two main uses: internal and external (see Fig. 4.1).
Monitoring and evaluation are important parts of RBM, based on clearly defined and measurable results, processes, methodologies and tools to achieve results. M&E can be viewed as providing a set of tools to enable RBM, helping decision makers track progress and demonstrate an intervention's higher-level results. Results-based M&E moves from a focus on the immediate results, such as the ...
A preamble provides a useful, concise discussion of the field of systems thinking, some high-level definitions and general considerations to inform the use of the principles in evaluation. Guidance on 'Systems-in-evaluation' and each of the four inter-related principles: Interrelationships, Perspectives, Boundaries and Dynamics, include:
Critical thinking refers to the process of actively analyzing, assessing, synthesizing, evaluating and reflecting on information gathered from observation, experience, or communication. It is thinking in a clear, logical, reasoned, and reflective manner to solve problems or make decisions. Basically, critical thinking is taking a hard look at ...
Monitoring and evaluation can help fuel innovative thinking and methods for data collection. While some fields require specific methods, others are open to more unique ideas. As an example, fields that have traditionally relied on standardized tools like questionnaires, focus groups, interviews, and so on can branch out to video and photo ...
over time (monitoring); how effectively a programme was implemented and whether there are gaps between the planned and achieved results (evaluation); and whether the changes in well-being are due to the programme and to the programme alone (impact evaluation). Monitoring is a continuous process of collecting and analysing
The cognitive skills of analysis, interpretation, inference, explanation, evaluation, and of monitoring and correcting one's own reasoning are at the heart of critical thinking (APA 1990). Critical thinking not only mimics the process of scientific investigation - identifying a question, formulating a hypothesis, gathering and analyzing ...
Triangulation facilitates validation of data through cross verification from more than two sources. It tests the consistency of findings obtained through different instruments and increases the chance to control, or at least assess some of the threats or multiple causes influencing our results. Triangulation is not just about validation but ...
Planning, monitoring and evaluation are at the heart of a learning-based approach to management. Achieving collaborative, business/environmental or personal goals requires effective planning and follow-through. The plan is effectively a "route-map" from the present to the future. To plan a … Linking planning with monitoring & evaluation - closing the loop Read More »
Monitoring and evaluation for thinking and working politically ... Tyrrel L, Kelly L, Roche C, et al. (2020) Uncertainty and COVID-19: A turning point for monitoring evaluation, research and ...
In summary, here are 10 of our most popular monitoring and evaluation courses. Measuring the Success of a Patient Safety or Quality Improvement Project (Patient Safety VI): Johns Hopkins University. Monitoring and Observability for Development and DevOps: IBM. Reviews & Metrics for Software Improvements: University of Alberta.
This document provides a guide to using the MEAL DPro (Monitoring, Evaluation, Accountability and Learning Digital Professional) toolkit. It is licensed under the Creative Commons Attribution-NonCommercial license. The guide acknowledges contributions from various organizations that informed its development. MEAL (Monitoring, Evaluation, Accountability and Learning) is a key part of project ...
Proven experience in monitoring and evaluation roles, preferably in the development or nonprofit sector. Strong understanding of monitoring and evaluation concepts, frameworks, and methodologies. Proficiency in quantitative and qualitative data analysis techniques and tools. Excellent analytical, critical thinking, and problem-solving skills.
Magno considered critical thinking an outcome of metacognition because critical thinking is formed through the "development and evaluation of arguments and coming up with inferences" (p. 139). Schuster (2019) conceptualized critical thinking as a "commitment to letting logic and reasoning be the driving force in guiding judgment and ...
At least 21 state legislatures have taken steps to reform K-12 media and information literacy education, with California, Delaware, Illinois, and New Jersey passing comprehensive reforms (U.S. Media Literacy Policy Report, Media Literacy Now, 2024). The largely bipartisan efforts are a response to challenges that most school curriculums do not yet address or teach—skills like sorting out ...
Respiratory failure is a common perioperative complication. The risk of respiratory failure can be reduced with effective preoperative evaluation, preventative measures, and knowledge of evidence-based management techniques. Effective preoperative screening methods include ARISCAT scoring, OSA screening, and the LAS VEGAS score (including the ASA physical status score). Evaluation by the six ...
In recent years, the integration of Internet of Things technologies in smart agriculture has become critical for sustainability and efficiency, to the extent that recent improvements have transformed greenhouse farming. This study investigated the complexity of IoT architecture in smart greenhouses by introducing a greenhouse language family (GreenH) that comprises three domain-specific ...