Three ways to incorporate evaluative thinking in monitoring.

Written by Tiina Pasanen

Monitoring and evaluation are traditionally considered to be interlinked but nevertheless distinct processes.

Monitoring is an ongoing system of gathering information and tracking a project’s performance using pre-selected indicators. Evaluation, by contrast, is about making overall assessments of a project’s effectiveness, outcomes, and whether it has met its objectives and aims. The former is usually done by project staff during the project’s lifespan, often mainly (and sometimes only) for donor reporting purposes. The latter is typically done by external evaluators towards the end of the programme.

However, over the past few years, the line between the two processes has started to blur, with ‘evaluative thinking’ increasingly creeping into monitoring processes and activities.

By evaluative thinking I simply mean ongoing questioning and analysis of (in this case) monitoring data to make evidence-informed improvements and alterations to project-related activities while the project is still running. It is about asking questions such as: ‘What do we see happening?’ ‘What is working?’ ‘What is not working and why?’

While project staff have always assessed how things are going, evaluative thinking is about making these assessments and reflections more structured and – in the case of large multi-project programmes – more systematic across projects and countries.

This is not to say end-of-project evaluations aren’t necessary. These still have value, as they give an overall ‘bigger’ picture and an outsider’s perspective of the programme. Evaluative thinking, by contrast, is about learning and improving ongoing programming.

There are a few possible explanations for this development. It could be due to increased emphasis on learning in international development. But it might (at least partly) be a response to the increasingly common understanding that findings and recommendations from end-of-project evaluations usually come too late to be useful for current programming. It could also reflect growing recognition that programme staff are often very capable of analysing their own work. Let’s be honest – the findings we external evaluators come up with, usually with limited involvement and time, aren’t always such revelations for those who have actually worked in a programme for a long time.

Either way, it’s a welcome shift, and we should actively encourage it. There are many ways to support programme staff to do their own analysis and reflection on monitoring data. Here are three approaches I’ve come across that may be helpful in fostering evaluative thinking:

Use learning partners to facilitate learning  

This is an approach that several funders, such as DFID and The Mastercard Foundation, have tested and invested in over recent years. By bringing in a semi-external/internal learning partner (rather than an external evaluator or a portfolio manager), the aim is, among other things, to continuously support learning and the use of monitoring and other data while programmes are running. One strategy to support this is organising regular learning seminars or meetings where evidence of progress is jointly analysed and discussed by programme staff, learning partners and sometimes funders too.

Use self-assessment scorecards to rate evidence and test programme assumptions

Scorecards have traditionally been used to improve the quality of public services (as in this example and toolkit). But they can also be used by programmes to generate qualitative evidence and embed a culture of learning and reflection, as I discovered from a presentation given by the Making All Voices Count (MAVC) governance programme at the latest UKES conference. While there are several steps involved in using scorecards, the basic idea is for a project team to first gather evidence of outcomes and changes they have seen taking place (positive or negative) and then come together to jointly reflect on and rate the quality of that evidence. The aim is to foster critical thinking, test programme assumptions and jointly develop actions to improve programming.

Use outcome mapping to understand progress towards transformational changes  

RAPID’s long-time favourite, this approach can be especially useful for programmes trying to address complex issues such as women’s empowerment, research influence or advocacy. It breaks outcomes down into smaller, more manageable steps, and can help programmes understand which changes fall within their spheres of control, influence and interest. The aim is to recognise and appreciate smaller behavioural changes that can in the long run lead to more substantial transformative changes. However, the key part of outcome mapping is the same as with the self-assessment scorecards above: joint sense-making at regular time points, where programme teams analyse the data collected and adapt their plans and strategies based on the evidence.

Whatever approaches a programme chooses to use, it’s also worth reiterating that joint reflection really is key. The common thread running through the tools and approaches above is the importance of regular and structured reflection meetings where programme data is jointly analysed.

This sounds simple but it is not easy. Anyone who has ever worked in international development knows how difficult it is to find the head space and time for joint reflection. It can be very resource-intensive to bring people together from several teams and countries. Where many partners are involved, each doing their own thing, there is also always the danger of merely promoting one’s own work and sharing only success stories. Sharing what doesn’t work requires trust, and trust takes time to build. But when facilitated well and conducted in a safe space, joint reflection can create healthy debate on strategic and technical issues (what works and what doesn’t), foster knowledge sharing across projects and trigger new ideas and solutions.

Tiina Pasanen

Research Associate


Metacognitive Strategies and Development of Critical Thinking in Higher Education

Silvia F. Rivas

1 Departamento de Psicología Básica, Psicobiología y Metodología de CC, Facultad de Psicología, Universidad de Salamanca, Salamanca, Spain

Carlos Saiz

Carlos Ossa

2 Departamento de Ciencias de la Educación, Facultad de Educación y Humanidades, Universidad del Bío-Bío, Sede Chillán, Chile

Associated Data

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.

More and more often, we hear that higher education should foment critical thinking. The new skills focus for university teaching grants a central role to critical thinking in new study plans; however, using these skills well requires a certain degree of consciousness and regulation of them. Metacognition therefore plays a crucial role in developing critical thinking: it consists of a person being aware of their own thinking processes in order to improve them for better knowledge acquisition. Critical thinking depends on these metacognitive mechanisms functioning well, on being conscious of the processes, actions, and emotions in play, and thereby on having the chance to understand what has not been done well and to correct it. Even though there is evidence of the relation between metacognitive processes and critical thinking, there are still few initiatives which seek to clarify which process determines which, or whether the two are interdependent. What we present in this study is therefore an intervention proposal to develop critical thinking and metaknowledge skills. In this context, Problem-Based Learning is a useful tool to develop these skills in higher education. The ARDESOS-DIAPROVE program seeks to foment critical thinking via metacognition and Problem-Based Learning methodology. It is known that learning quality improves when students apply metacognition; it is also known that effective problem-solving depends not only on critical thinking, but also on the skill of realization and on cognitive and non-cognitive regulation. The study presented hereinafter therefore has the fundamental objective of showing whether instruction in critical thinking (ARDESOS-DIAPROVE) influences students’ metacognitive processes, one consequence being that critical thinking improves with the use of metacognition. The sample comprised first-year psychology students at the Public University of the North of Spain who underwent the program; PENCRISAL was used to evaluate critical thinking skills and the Metacognitive Awareness Inventory (MAI) to evaluate metacognition. We expected an increase in critical thinking and metacognition scores following this intervention. In conclusion, we indicate actions to incentivize metacognitive work among participants, both individually, via reflective questions and decision diagrams, and at the interactional level, with dialogues and reflective debates which strengthen critical thinking.

Introduction

One of the principal objectives which education must cover is helping our students become autonomous and effective. Students’ ability to use strategies which help them direct their motivation toward action in the direction of the meta-proposal is a central aspect to keep at the front of our minds when considering education. This is where metacognition comes into play—knowledge about knowledge itself, a component which is in charge of directing, monitoring, regulating, organizing, and planning our skills in a helpful way, once these have come into operation. Metacognition helps form autonomous students, increasing consciousness about their own cognitive processes and their self-regulation so that they can regulate their own learning and transfer it to any area of their lives. As we see, it is a conscious activity of high-level thinking which allows us to look into and reflect upon how we learn and to control our own strategies and learning processes. We must therefore approach a problem which is increasing in our time, that of learning and knowledge from the perspective of active participation by students. To achieve these objectives of “learning to learn” we must use adequate cognitive learning strategies, among which we can highlight those oriented toward self-learning, developing metacognitive strategies, and critical thinking.

Metacognition is one of the research areas which has contributed most to the formation of new conceptions of learning and teaching. In this sense, it has advanced within the constructivist conceptions of learning, which have attributed an increasing role to students’ consciousness and to the regulation they exercise over their own learning (Glaser, 1994).

Metacognition was initially introduced by John Flavell in the early 1970s. He affirmed that metacognition, on one side, refers to “the knowledge which one has about his own cognitive processes and products, or any other matter related with them” and, on the other, “to the active supervision and consequent regulation and organization of these processes in relation with the objects or cognitive data upon which they act” (Flavell, 1976, p. 232). Based on this, we can differentiate two components of metacognition: one of a declarative nature, which is metacognitive knowledge, referring to knowledge of the person and the task, and another of a procedural nature, which is metacognitive control or self-regulated learning, which is always directed toward a goal and controlled by the learner.

Different authors have pointed out that metacognition comprises these areas of thought or skill, aimed either at knowledge or at the regulation of thought and action, mainly proposing a binary organization in which attentional processes are oriented at times toward an object or subject and, at other times, toward interaction with objects and/or subjects (Drigas and Mitsea, 2021). However, it is possible to understand metacognition from another approach that establishes more levels of use of metacognitive thinking to promote knowledge, awareness, and intelligence, known as the eight pillars of metacognition model (Drigas and Mitsea, 2020). These pillars allow thought to promote the use of deep knowledge, cognitive processes, self-regulation, functional adaptation to society, pattern recognition and operations, and even meaningful memorization (Drigas and Mitsea, 2020).

In addition to the above, Drigas and Mitsea’s model establishes different levels at which metacognition can be used, in a complex sequence running from stimuli to transcendental ideas, in which each of the pillars can manifest a different facet of the metacognitive process, thus establishing a dialectical and integrative approach to learning and knowledge that allows it to be understood as an evolutionary, complex, staged process (Drigas and Mitsea, 2021).

All this clarifies the importance of and need for metacognition, not only in education but also in modern society, since the need to “teach how to learn” and the capacity to “learn how to learn,” in order to achieve autonomous learning and transfer it to any area of our lives, will let us face problems more successfully. This becomes a relevant challenge, especially today, when a broad view of reflection and consciousness is required, transcending simplistic and reductionist models that seek to center the problem of knowledge solely on the neurobiological or the phenomenological scope (Sattin et al., 2021).

Critical thinking depends largely on these mechanisms functioning well and on being conscious of the processes used, since this gives us the opportunity to understand what has not been done well and correct it in the future. Consciousness for critical thinking implies a continuous process of reusing thought, in escalations that allow thinking to be oriented both toward the objects of the world and toward the subjective interior, making it possible to identify the ideas that give the person greater security; in this perspective, the metacognitive process represents this use of awareness, also allowing the generation of an identity of the knowing being (Drigas and Mitsea, 2021).

We know that thinking critically involves reasoning and deciding to effectively solve a problem or reach goals. However, effective use of these skills requires a certain degree of consciousness and regulation of them. The ARDESOS-DIAPROVE program seeks precisely to foment critical thinking, in part, via metacognition ( Saiz and Rivas, 2011 , 2012 , 2016 ).

However, it is not only centered on developing cognitive components, as this would be an important limitation. Since the 1990s, it has been known that non-cognitive components play a crucial role in developing critical thinking, yet few studies have focused on this relation. This intervention therefore considers both dimensions, with metacognitive processes playing an essential role by providing evaluation and control mechanisms over the cognitive dimension.

Metacognition and Critical Thinking

Critical Thinking is a concept without a firm consensus, as there have been and still are varying conceptions regarding it. Its nature is so complex that it is hard to synthesize all its aspects in a single definition. While there are numerous conceptions of critical thinking, it is necessary to be precise about which definition we will use. We understand that “critical thinking is a knowledge-seeking process via reasoning skills to solve problems and make decisions which allows us to more effectively achieve our desired results” (Saiz and Rivas, 2008, p. 131). Thinking effectively is desirable in all areas of individual and collective action. The background of the present field of critical thinking also lies in argumentation. Reasoning is used as the fundamental basis for all activities labeled as thinking. In a way, thinking cannot easily be decoupled from reasoning, at least if our understanding of it is “deriving something from another thing.” Inference or judgment is what we essentially find behind the concept of thinking. The question, though, is whether it can be affirmed that thinking is only reasoning. Some defend this concept (Johnson, 2008), while others believe the opposite, that solving problems and making decisions are activities which also form part of thinking processes (Halpern, 2003; Halpern and Dunn, 2021, 2022). To move forward in this sense, we will return to our previous definition. In that definition, we have specified intellectual activity with a goal intrinsic to all mental processes, namely, seeking knowledge. Achieving our ends depends not only on the intellectual dimension, as we may also need our motor or perceptive activities; it therefore contributes little to affirm that critical thinking allows us to achieve our objectives, since we can also achieve them by doing other activities. It is important for us to make an effort to identify the mental processes responsible for thinking and distinguish them from other things.

Normally, we think to solve our problems. This is the second important activity of thought. A problem can be solved by reasoning, but also by planning a course of action or selecting the best strategy for the situation. Apart from reasoning, we must therefore also make decisions to resolve difficulties. Choosing is one of the most frequent and important activities we perform, which is why we prefer to give it the leading role it deserves in a definition of thinking. Solving problems demands multiple intellectual activities, including reasoning, deciding, planning, etc. The final characteristic goes beyond the mechanisms peculiar to inference. What can be seen at the moment of delineating what it means to think effectively is that concepts are grouped together which go beyond the nuclear ideas of inferring or reasoning. The majority of theoreticians in the field (APA, 1990; Ennis, 1996; Halpern, 1998, 2003; Paul and Elder, 2001; Facione, 2011; Halpern and Dunn, 2021, 2022) consider that, in order to carry out this type of thinking effectively, apart from having this skill set, the intervention of other types of components is necessary, such as metacognition and motivation. This is why we consider it necessary to speak about the components of critical thinking, as we can see in Figure 1:

Figure 1. Components of critical thinking (Saiz, 2020).

In the nature of thinking, there are two types of components: the cognitive and the non-cognitive. The former include perception, learning, and memory processes. Learning is any knowledge acquisition mechanism, the most important of which is thinking. The latter refer to motivation and interests (attitudes tend to be understood as dispositions, inclinations…something close to motives), with metacognition remaining a process which shares cognitive and non-cognitive aspects, as it incorporates aspects of both judgment (evaluation) and disposition (control/efficiency) about thoughts (Azevedo, 2020; Shekhar and Rahnev, 2021). Both the cognitive and non-cognitive components are essential to improve critical thinking, as one component is incomplete without the other; that is, neither cognitive skills nor dispositions on their own suffice to train a person to think critically. In general, the relations are bidirectional, although for didactic reasons only unidirectional relations appear in Figure 1 (Rivas et al., 2017). This is because learning is a dynamic process which is subject to all types of influence. For instance, if a student is motivated, they will work more and better, or at least this is what is hoped for. If they achieve good test scores as well, it can be supposed that motivation is reinforced, so that they will continue in the same direction, that is, working hard and well on their studies. This latter point appears to arise, at least in part, from an adjustment between expectations and reality which the student achieves thanks to metacognition, which allows them to effectively attribute their achievements to their efforts (Ugartetxea, 2001).

Metacognition, which is our interest in this paper, should also have bidirectional relations with critical thinking. Metacognition tends to be understood as the degree of consciousness which we have about our own mental processes and similar to the capacity for self-regulation, that is, planning and organization ( Mayor et al., 1993 ). We observe that these two ideas have very different natures. The former is simpler, being the degree of consciousness which we reach about an internal mechanism or process. The latter is a less precise idea, since everything which has to do with self-regulation is hard to differentiate from a way of understanding motivation, such as the entire tradition of intrinsic motivation and self-determination from Deci, his collaborators, and other authors of this focus (see, e.g., Deci and Ryan, 1985 ; Ryan and Deci, 2000 ). The important thing is to emphasize the executive dimension of metacognition, more than the degree of consciousness, for practical reasons. It can be expected that this dimension has a greater influence on the learning process than that of consciousness, although there is little doubt that we have to establish both as necessary and sufficient conditions. However, the data must speak in this regard. Due to all of this, and as we shall see hereinafter, the intervention designed incorporates both components to improve critical thinking skills.

We can observe, though, that the basic core of critical thinking continues to be topics related to skills, in our case, reasoning, problem-solving, and decision-making. The fact that we incorporate concepts of another nature, such as motivation, in a description of critical thinking is justified because it has been proven that, when speaking about critical thinking, the fact of centering solely on skills does not allow for fully gathering its complexity. The purpose of the schematic in Figure 2 is to provide conceptual clarity to the adjective “critical” in the expression critical thinking . If we understand critical to refer to effective , we should also consider that effectiveness is not, as previously mentioned, solely achieved with skills. They must be joined together with other mechanisms during different moments. Intellectual skills alone cannot achieve the effectiveness assumed within the term “critical.” First, for said skills to get underway, we must want to do so. Motivation therefore comes into play before skills and puts them into operation. For its part, metacognition allows us to take advantage of directing, organizing, and planning our skills and act once they have begun to work. Motivation thus activates our abilities, while metacognition lets them be more effective. The final objective should always be to gain proper knowledge of reality to resolve our problems.

Figure 2. Purpose of critical thinking (Saiz, 2020, p. 27).

We consider that the fact of referring to components of critical thinking while differentiating the skills of motivation and metacognition aids with the conceptual clarification we seek. On one side, we specify the skills which we discuss, and on another, we mention which other components are related to, and even overlap with them. We must be conscious of how difficult it is to find “pure” mental processes. Planning a course of action, an essential trait of metacognition, demands reflection, prediction, choice, comparison, and evaluation… And this, evidently, is thinking. The different levels or dimensions of our mental activity must be related and integrated. Our aim is to be able to identify what is substantial in thinking to know what we are able to improve and evaluate.

It is widely known that for our personal and professional functioning, thinking is necessary and useful. When we want to change a situation or gain something, all our mental mechanisms go into motion. We perceive the situation, identify relevant aspects of the problem, analyze all the available information, and appraise everything we analyze. We make judgments about the most relevant matters, decide about the options or pathways for resolution, execute the plan, obtain results, evaluate the results, estimate whether we have achieved our purpose and, according to the level of satisfaction following this estimation, consider our course of action good, or not.

The topic we must pose now is what things are teachable. It is useful to specify that what is acquired is clearly cognitive along with some of the non-cognitive, because motivation can be stimulated or promoted, but not taught. The concepts of knowledge and wisdom are its basis. Mental representation and knowledge only become wisdom when we can apply them to reality, when we take them out of our mind and adequately situate them in the world. For our teaching purposes, we only have to take a position about whether knowledge is what makes critical thinking develop, or vice versa. For us, skills must be directly taught, and mastery is secondary. Up to now, we have established the components of critical thinking, but these elements still have to be interrelated properly. What we normally find are skills or components placed side by side or overlapping, without the ways in which they influence each other. Lipman (2003) may have developed the most complete theory of critical and creative thinking, followed by Paul and his group with their universal thought structures (Paul and Elder, 2006). However, a proposal for the relation between the elements is lacking.

To try to explain the relation between the components of thought, we will use Figure 2 as an aid.

The ultimate goal of critical thinking is change, that is, passing from one state of wellbeing into a better one. This change is only the fruit of results, which must be the best ones. Effectiveness is simply achieving our goals in the best way possible. There are many possible results, but for our ends, some are always better than others. Our position must be for effectiveness, the best response, the best solution. Reaching a goal is resolving or achieving something, and for this, we have mechanisms available which tell us which courses of action are best. Making decisions and solving problems are fundamental skills which are mutually interrelated. Decision strategies come before a solution. Choosing a course of action always comes before its execution, so it is easy to understand that decisions contribute to solutions.

Decisions must not come before reflection, although this often can and does happen. As we have already mentioned, the fundamental skills of critical thinking, in most cases, have been reduced to reasoning, and to a certain degree, this is justified. There is an entire important epistemological current behind this, within which the theory of argumentation makes no distinction, at least syntactically, between argumentation and explanation. However, for us this distinction is essential, especially in practice ( Saiz, 2020 ). We will only center on an essential difference for our purpose. Argumentation may have to do with values and realities, but explanation only has to do with the latter. We can argue about beliefs, convictions, and facts, but we can only explain realities. Faced with an explanation of reality, any argumentation would be secondary. Thus, explanation will always be the central skill in critical thinking.

The change which is sought is always expressed in reality. Problems are always manifested and resolved with actions, and these are always a reality. An argument about realities aids in explaining them. An argument about values upholds a belief or a conviction. However, beliefs always influence behavior; thus, indirectly, the argument winds up being about realities. One may argue, for example, only for or against the death penalty, reach the conviction that it is good or bad, and ultimately take a position for or against allowing it. This is why we say that deciding always comes before resolving; furthermore, resolution always means deciding about something in a particular direction. It always means choosing and taking an option, and deciding is often between only two possibilities: the better option and the one which is not as good. Decisions are made based on the best option possible among all those which can be presented. Resolution is a dichotomy. Since our basic end lies within reality, explanation must be constituted as the basic pillar to produce change. Argumentation must therefore be at the service of causality (explanation), and both must be in the service of solid decisions leading us to the best solution or change of situation. We now believe that the relation established in Figure 2 can be better understood. From this relation, we propose that thinking critically means reaching the best explanation for an event, phenomenon, or problem in order to know how to effectively resolve it (Saiz, 2017, p. 19). This idea, in our judgment, is the best summary of the nature of critical thinking. It clarifies details and makes explicit the components of critical thinking.

Classroom Activities to Develop Metacognition

We will present a set of strategies to promote metacognitive work in the classroom in this section, aimed at improving critical thinking skills. These strategies can be applied both at the university level and the secondary school level; we will thus focus on these two levels, although metacognitive strategies can be worked on from an earlier age ( Jaramillo and Osses, 2012 ; Tamayo-Alzate et al., 2019 ) and some authors have indicated that psychological maturity has a greater impact on effectively achieving metacognition ( Sastre-Riba, 2012 ; García et al., 2016 ).

At the individual level, metacognition can be worked on by applying questions aimed at the relevant actions which must be undertaken regarding a task (meta-knowledge questions), for example:

  • Do I know how much I know about this subject?
  • Do I have clear instructions and know what action is expected from me?
  • How much time do I have?
  • Am I covering the proper and necessary subjects, or is there anything important left out?
  • How do I know that my work is right?
  • Have I covered every point of the rubric for the work to gain a good grade or a sufficient level?

These reflective questions facilitate supervising knowledge level, resource use, and the final product achieved, so that the decisions taken for said activities are the best and excellent learning results are achieved.

Graphs or decision diagrams can also be used to aid in organizing these questions during the different phases of executing a task (planning, progress, and final evaluation), which is clearly linked with the knowledge and control processes of metacognition ( Mateos, 2001 ). These diagrams are more complex and elaborate strategies than the questions, but are effective when monitoring the steps considered in the activity ( Ossa et al., 2016 ). Decision diagrams begin from a question or task, detailing the principal steps to take, and associating an alternative (YES or NO) to each step, which leads to the next step whenever the decision is affirmative, or to improve or go further into the step taken if the decision is negative.
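To make this structure concrete, the sketch below shows one way such a decision diagram could be represented and walked in code. It is a minimal illustration under our own assumptions, not a tool from the cited studies: the three phases come from the text, while the particular questions and remedial actions are invented for the example.

```python
# Minimal sketch of a metacognitive decision diagram: each step poses a YES/NO
# question; YES advances to the next phase, while NO points back to a remedial
# action (improving or going further into the current step), as described above.
# The questions and remedies are illustrative placeholders.

STEPS = [
    # (phase, guiding question, action to take if the answer is NO)
    ("planning", "Do I understand what the task is asking of me?",
     "Re-read the instructions and restate the task in your own words."),
    ("progress", "Is my current strategy moving me toward the goal?",
     "Pause, list what is not working, and adjust the strategy."),
    ("evaluation", "Does my result meet every criterion of the rubric?",
     "Identify the unmet criteria and revise before submitting."),
]

def walk_diagram(answers):
    """Walk the diagram with a list of YES/NO answers; return triggered remedies."""
    remedies = []
    for (phase, question, remedy), answer in zip(STEPS, answers):
        print(f"[{phase}] {question} -> {'YES' if answer else 'NO'}")
        if not answer:  # NO branch: improve or go deeper into this step
            remedies.append(remedy)
    return remedies

# Example run: the student falters at the progress-monitoring step.
for action in walk_diagram([True, False, True]):
    print("Remedial action:", action)
```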

Finally, we can work on thinking aloud, a strategy which makes thoughts explicit and conscious, allowing students to monitor their knowledge, decisions, and actions and promoting conscious planning, supervision, and evaluation (Ávila et al., 2017; Dahik et al., 2019). For example:

  • While asking a question, the student thinks aloud: I am having problems with this part of the task, and I may have to ask the teacher to know whether I am right.

Thinking aloud can be done individually or in pairs, allowing for active monitoring of decisions and questions arising from cognitive and procedural work done by the student.

Apart from the preceding strategies, it is also possible to fortify metacognitive development via personal interactions based on dialogue, both between the students themselves and between the teacher and individual students. One initial strategy, similar to thinking aloud in pairs, is reflective dialogue between teacher and student, a technique which allows for exchanging deep questions and answers, where the student becomes conscious of their knowledge and practice thanks to dialogical interventions by the teacher (Urdaneta, 2014).

Reflective dialogue can also be done via reflective feedback implemented by the teacher for the students to learn by themselves about the positive and negative aspects of their performance on a task.

Finally, another activity based on dialogue and interaction is related to metacognitive argumentation ( Sánchez-Castaño et al., 2015 ), a strategy which uses argumentative resources to establish a valid argumentative structure to facilitate responding to a question or applying it to a debate. While argumentative analysis is based on logic and the search for solid reasons, these can have higher or lower confidence and reliability as a function of the data which they provide. Thus, if a reflective argumentative process is performed, via questioning reasons or identifying counterarguments, there is more depth and density in the argumentative structure, achieving greater confidence and validity.

We can note that metacognition development strategies are based on reflective capacity, which allows thought to repeatedly review the information and decisions under consideration, without immediately taking sides or being carried away by superficial or biased ideas or data. Critical thought benefits strongly from applying this reflective process, which guides both data management and cognitive process use. These strategies can also be developed in various formats (written, graphic, oral, individual, and dialogical), providing teachers a wide range of tools to strengthen learning and thinking.

Metacognitive Strategies to Improve Critical Thinking

In this section, we will describe the fundamental metacognitive strategies addressed in our critical thinking skills development program ARDESOS-DIAPROVE.

First, one of the active learning methodologies applied is Problem-Based Learning (PBL). This pedagogical strategy is student-centered and encourages autonomous and participative learning, orienting students toward more active and decisive learning. In PBL, each situation must be approached as a problem-solving task, making it necessary to investigate, understand, interpret, reason, decide, and resolve. It is presented as a methodology which facilitates joint knowledge acquisition and skill learning. It is also well suited to working on daily problems via relevant situations, considerably reducing the distance between the learning context and personal/professional life and aiding the connection between theory and practice, which promotes the highly desired transference. It favors organization and the capacity to decide about problem-solving, which also improves performance and knowledge about the students’ own learning processes. Because of all this, this methodology aids reflection and analysis processes, which in turn promote metacognitive skill development.

The procedure which we carried out in the classroom with all the activities is based on the philosophy of gradual transference of learning control (Mateos, 2001). During instruction, the teacher takes on the role of model and guide for students’ cognitive and metacognitive activity, gradually bringing them to participate at an increasing level of competency and slowly withdrawing support so that students gain control over their own learning process. This methodology develops in four phases: (1) explicit instruction, where the teacher directly explains the skills which will be worked on; (2) guided practice, where the teacher acts as a collaborator to guide and aid students in self-regulation; (3) cooperative practice, where cooperative group work facilitates interaction with a peer group collaborating to resolve the problem (by explaining, elaborating, and justifying their own points of view and alternative solutions, greater consciousness, reflection, and control over their own cognitive processes are promoted); and finally, (4) individual practice, where students put their learning into practice in individual evaluation tasks.

Regarding the tasks, it is important to highlight that the activities must be aimed not only at acquiring declarative knowledge, but also at procedural knowledge. The objective of practical tasks, apart from developing fundamental knowledge, is to develop CT skills among students in both comprehension and expression in order to favor their learning and its transference. The problems used must be common situations, close to our students’ reality. The important thing in our task of teaching critical thinking is its usefulness to our students, which can only be achieved during application since we only know something when we are capable of applying it. We are not interested in students merely developing critical skills; they must also be able to generalize their intellectual skills, for which they must perceive them as useful in order to want to acquire them. Finally, they will have to actively participate to apply them to solving problems. Furthermore, if we study the different ways of reasoning without context, via overly academic problems, their application to the personal sphere becomes impossible, leading them to be considered hardly useful. This makes it important to contextualize skills within everyday problems or situations which help us get students to use them regularly and understand their usefulness.

Reflecting on how one carries things out in practice and analyzing mistakes are ways to encourage success and autonomy in learning. These self-regulation strategies are the properly metacognitive part of our study. The teacher has various resources to increase these strategies, particularly feedback oriented toward task resolution. Similarly, one of the most effective instruments to achieve it is using rubrics, a central tool for our methodology. These guides, used in student performance evaluations, describe the specific characteristics of a task at various performance levels, in order to clarify expectations for students’ work, evaluate their execution, and facilitate feedback. This type of technique also allows students to direct their own activity. We use them with this double goal in mind; on the one hand, they aid students in carrying out tasks, since they help divide the complex tasks they have to do into simpler jobs, and on the other, they help evaluate the task. Rubrics guide students in the skills and knowledge they need to acquire as well as facilitating self-evaluation, thereby favoring responsibility in their learning. Task rubrics are also the guide for evaluation which teachers carry out in classrooms, where they specify, review, and correctly resolve the tasks which students do according to the rubric criteria. Providing complete feedback to students is a crucial aspect for the learning process. Thus, in all sessions time is dedicated to carrying it out. This is what will allow them to move ahead in self-regulated skill learning.
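As a minimal illustration of how such a task rubric could be represented for self-evaluation and feedback, consider the sketch below; the criteria and level descriptors are invented for the example, not the program’s actual rubrics.

```python
# Sketch of a task rubric used for self-evaluation and feedback, as described
# above: each criterion describes performance at several levels, and a student
# (or teacher) records the level reached. Criteria and descriptors are invented.

RUBRIC = {
    "identifies_the_problem": ["not stated", "vague statement", "precise statement"],
    "justifies_the_strategy": ["no reasons", "partial reasons", "solid reasons"],
    "evaluates_the_result":   ["no check", "checks outcome", "checks outcome and process"],
}

def feedback(levels):
    """levels: dict mapping criterion -> level index reached (0-based)."""
    notes = []
    for criterion, descriptors in RUBRIC.items():
        reached = levels[criterion]
        note = f"{criterion}: {descriptors[reached]}"
        if reached < len(descriptors) - 1:   # below top level: show the target
            note += f" -> aim for: {descriptors[-1]}"
        notes.append(note)
    return notes

for line in feedback({"identifies_the_problem": 2,
                      "justifies_the_strategy": 1,
                      "evaluates_the_result": 0}):
    print(line)
```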

According to what we have seen, there is a wide range of positions when it comes to defining critical thinking. However, there is consensus that critical thinking involves cognitive, attitudinal, and metacognitive components, which together favor proper performance in critical thinking (Ennis, 1987; Facione, 1990). This important relation between metacognition and critical thinking has been widely studied in the literature (Berardi-Coletta et al., 1995; Antonietti et al., 2000; Kuhn and Dean, 2004; Black, 2005; Coutinho et al., 2005; Orion and Kali, 2005; Schroyens, 2005; Akama, 2006; Choy and Cheah, 2009; Magno, 2010; Arslan, 2014), although not always in an applied way. Field studies indicate the existence of relations between teaching metacognitive strategies and progress in students’ higher-order thinking processes (Schraw, 1998; Kramarski et al., 2002; Van der Stel and Veenman, 2010). Metacognition is thus considered one of the most relevant predictors of achieving a complex higher-order thought process.

Along the same lines, different studies show the importance of developing metacognitive skills among students, as these are related not only to developing critical thinking, but also to academic achievement and self-regulated learning (Klimenko and Alvares, 2009; Magno, 2010; Doganay and Demir, 2011; Özsoy, 2011). Klimenko and Alvares (2009) indicated that one way for students to acquire the tools needed for autonomous learning is to make cognitive and metacognitive strategies explicit and well used, with teachers acting as mediators and guides. In spite of this evidence, there is less research on the use of metacognitive strategies to encourage critical thinking. The principal reason is probably that it is methodologically difficult to gather direct data about active metacognitive processes, which are complex by nature. Self-reporting is also still very common in metacognition evaluation, and few studies have included objective measurements that aid methodological precision in evaluating metacognition.

However, in recent years, greater importance has been assigned to teaching metacognitive skills in the educational system, as they aid students in developing higher-order thinking processes and improving their academic success ( Flavell, 2004 ; Larkin, 2009 ). Because of this, classrooms have seen teaching and learning strategies emphasizing metacognitive knowledge and regulation. Returning to our objective, which is to improve critical thinking via the ARDESOS-DIAPROVE program, we have achieved our goal in an acceptable way ( Saiz and Rivas, 2011 , 2012 , 2016 ).

However, we need to know which specific factors contribute to this improvement. We have covered significant ground through different studies, one of which we present here. In it, we attempt to find out the role of metacognition in critical thinking; this is the central objective of the study. Our program includes motivational and metacognitive variables, so we seek to find out whether metacognition improves after this instruction program. Our hypothesis is simple: we expect the instruction to improve our students’ metacognition. The idea is to know whether applying metacognition helps us achieve improved critical thinking and whether, after this change, metaknowledge itself improves. In other words, improved critical thinking performance will make us think better about thinking processes themselves. If this can be improved, we can expect it to have a greater influence on critical thinking in the future. The idea is to be able to demonstrate that, by applying specifically metacognitive techniques, the processes themselves will subsequently improve in quality and therefore contribute greater volume and quality to reasoning tasks, decision-making, and problem-solving.

Materials and Methods

Participants

In the present study, we used a sample of 89 students in a first-year psychology course at the Public University of the North of Spain; 82% (73) were women and 18% (16) were men. Participants’ mean age was 18.93 years (SD = 1.744).

Instruments

Critical Thinking Test

To measure critical thinking skills, we applied the PENCRISAL test ( Saiz and Rivas, 2008 ; Rivas and Saiz, 2012 ). The PENCRISAL is a battery consisting of 35 production problem situations with an open-answer format, composed of five factors: Deductive Reasoning , Inductive Reasoning , Practical Reasoning , Decision-Making , and Problem-Solving , with seven items per factor. Items for each factor gather the most representative structures of fundamental critical thinking skills.

The items’ format is open, so that the person has to answer a concrete question and add a justification of the reasons behind their answer. Because of this, there are standardized correction criteria assigning values between 0 and 2 points as a function of answer quality. The test offers a total critical thinking score and five further scores, one per factor. The value range runs from 0 to 70 points for the total test score (35 items at 0–2 points each) and from 0 to 14 for each of the five scales. The reliability measures present adequate precision levels according to the scoring procedures, with the lowest Cronbach’s alpha value at 0.632 and the test–retest correlation at 0.786 (Rivas and Saiz, 2012). PENCRISAL administration was done over the Internet via the evaluation platform SelectSurvey.NET V5: http://24.selectsurvey.net/pensamiento-critico/Login.aspx.
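As an illustration of the aggregation just described (five factors of seven items, each item scored 0–2, giving factor scores of 0–14 and a total of 0–70), a scorer’s tallying step might look like the following sketch. It is not part of the published instrument; the factor names follow the text and the ratings are invented.

```python
# Illustrative aggregation of PENCRISAL item ratings: 5 factors x 7 items,
# each item already scored 0-2 by the standardized correction criteria.
# This is a sketch for exposition, not the published scoring software.

FACTORS = ["deductive_reasoning", "inductive_reasoning", "practical_reasoning",
           "decision_making", "problem_solving"]

def score_pencrisal(ratings):
    """ratings: dict mapping each factor to its seven 0-2 item ratings."""
    scores = {}
    for factor in FACTORS:
        items = ratings[factor]
        assert len(items) == 7 and all(r in (0, 1, 2) for r in items), \
            f"{factor}: expected seven ratings, each 0, 1, or 2"
        scores[factor] = sum(items)                    # each factor ranges 0-14
    scores["total"] = sum(scores[f] for f in FACTORS)  # total ranges 0-70
    return scores

# Invented ratings for one test-taker.
example = {f: [1, 2, 0, 1, 1, 2, 1] for f in FACTORS}
print(score_pencrisal(example))  # factor scores of 8 and a total of 40
```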

Metacognitive Skill Inventory

Metacognitive skill evaluation was done via the Metacognitive Awareness Inventory from Schraw and Dennison (1994) (MAI; Huertas Bustos et al., 2014). This questionnaire has 52 five-point Likert-type items. The items are distributed in two general dimensions: knowledge of cognition (C) and regulation of cognition (R). This provides ample coverage for the two aforementioned ideas about metaknowledge. Eight subcategories are defined across these two general dimensions. For C, these are declarative knowledge (DK), procedural knowledge (PK), and conditional knowledge (CK). In R, we find planning (P), organization (O), monitoring (M), troubleshooting (T), and evaluation (E). This instrument comprehensively, and fairly clearly, brings together the essential aspects of metacognition. On one side, there is the level of consciousness, containing the types of knowledge: declarative, procedural, and strategic. On the other, it considers everything important in the processes of self-regulation: planning, organization, direction or control (monitoring), adjustment (troubleshooting), and considering the results achieved (evaluation). It provides a very complete vision of everything important in this dimension. Cronbach’s alpha for this instrument is 0.94, showing good internal consistency.
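To make the two-dimension, eight-subscale structure tangible, here is a minimal scoring sketch. The subscale names follow the text; the assignment of items to subscales below is a placeholder, not the published MAI key.

```python
# Sketch of aggregating MAI responses (52 Likert items, scored 1-5) into the
# knowledge (C) and regulation (R) dimensions and their eight subscales.
# The item-to-subscale key below is a round-robin placeholder, NOT the real key.

DIMENSIONS = {
    "C": ["declarative", "procedural", "conditional"],
    "R": ["planning", "organization", "monitoring", "troubleshooting", "evaluation"],
}
SUBSCALES = DIMENSIONS["C"] + DIMENSIONS["R"]

# Placeholder key: deal items 1..52 across the eight subscales in turn.
KEY = {name: list(range(i + 1, 53, 8)) for i, name in enumerate(SUBSCALES)}

def score_mai(responses):
    """responses: dict mapping item number (1..52) to a 1-5 Likert answer."""
    subscale = {name: sum(responses[i] for i in items) for name, items in KEY.items()}
    dimension = {dim: sum(subscale[s] for s in subs) for dim, subs in DIMENSIONS.items()}
    dimension["total"] = dimension["C"] + dimension["R"]
    return subscale, dimension

# Invented all-neutral answers: every item rated 3.
subs, dims = score_mai({i: 3 for i in range(1, 53)})
print(dims)  # {'C': ..., 'R': ..., 'total': 156}
```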

Intervention Program

As previously mentioned, in this study we applied the third version of the ARDESOS-DIAPROVE program (Saiz and Rivas, 2016; Saiz, 2020), with the objective of improving thinking skills. This program is centered on directly teaching the skills which we consider essential to develop critical thinking and for proper performance in our daily affairs. For this, we must use reasoning and good problem-solving and decision-making strategies, with one of the most fundamental parts of our intervention being the use of everyday situations to develop these abilities.

DIAPROVE methodology incorporates three new and essential aspects: developing observation, the combined use of facts and deduction, and effective management of de-confirmation procedures, or discarding hypotheses. These are the foundation of our teaching, which requires specific teaching–learning techniques.

The intervention took place over 16 weeks and is designed to be applied in classrooms over a timeframe of 55–60 h. The program is applied in classes of around 30–35 students divided into groups of four for classwork in collaborative groups, and organized into six activity blocks: (1) nature of critical thinking, (2) problem-solving and effectiveness, (3) explanation and causality, (4) deduction and explanation, (5) argumentation and deduction, and (6) problem-solving and decision-making. These blocks are assembled maintaining homogeneity, facilitating a global integrated skill focus which helps form comprehension and use of the different structures in any situation as well as a greater degree of ability within the domain of each skill.

Our program made an integrated use of problem-based learning (PBL) and cooperative learning (CL) as didactic teaching and learning strategies in the critical thinking program. These methodologies jointly exert a positive influence on the students, allowing them to participate more actively in the learning process, achieve better results in contextualizing content and developing skills and abilities for problem-solving, and improve motivation.

To carry out our methodology in the classrooms, we have designed a teaching system aligned with these directives. Two types of tasks are done: (1) comprehension and (2) production. The materials we used to carry out these activities are the same for all the program blocks. One key element in our aim of teaching how to think critically must be its usefulness to our students, which is only achieved through application. This makes it important to contextualize reasoning types within common situations or problems, aiding students to use them regularly and understand their usefulness. Our intention with the materials we use is to face the problems of transference, usefulness, integrated skills, and how to produce these things. Accordingly, the materials used for the tasks are: (1) common situations and (2) professional/personal problems.

The tasks which the students perform take place over a week. They work in cooperative groups in class, and then review, correct, and clarify together, promoting reflection on their achievements and errors, which fortifies metacognition. Students get the necessary feedback on the work performed which will help them progressively acquire fundamental procedural contents. Our goal here is that students become conscious of their own thought processes in order to improve them. In this way, via the dialogue achieved between teachers and students as well as between the students themselves in their cooperative work, metacognition is developed. For conscious performance of tasks, the students will receive rubrics for each and every task to guide them in their completion.

Application of the ARDESOS-DIAPROVE program was done across a semester in the Psychology Department of the Public University of the North of Spain. One week before teaching began, critical thinking and metacognition evaluations were done. This was repeated 1 week after the intervention ended, in order to gather the second measurement for PENCRISAL and MAI. The time lapse between the pre-treatment and post-treatment measurements was 4 months. The intervention was done by instructors with training and good experience in the program.

To test our objective, we used a quasi-experimental pre-post design with repeated measurements.

Statistical Analysis

For statistical analysis, we used the IBM SPSS Statistics 26 statistical package. The statistical tools and techniques used were: frequency and percentage tables for qualitative variables, exploratory and descriptive analysis of quantitative variables with a goodness-of-fit test against the normal Gaussian model, the habitual descriptive statistics (mean, SD, etc.) for numerical variables, and Student’s t-tests for the significance of differences.
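For readers who want to reproduce this pipeline outside SPSS, a rough equivalent in Python might look as follows. The data here are simulated stand-ins, since the study’s raw scores are not published with the article.

```python
# Rough re-creation of the reported pipeline: descriptives, a goodness-of-fit
# check against the normal model, and a paired Student's t-test (df = n - 1).
# Simulated data only; the study used the actual PENCRISAL/MAI scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.normal(25.1, 5.4, size=89)        # simulated pre-test totals (n = 89)
post = pre + rng.normal(7.5, 4.0, size=89)  # simulated post-test totals

for name, x in (("pre", pre), ("post", post)):
    z = (x - x.mean()) / x.std(ddof=1)      # standardize before the K-S test
    ks = stats.kstest(z, "norm")            # rough normality check
    print(f"{name}: mean={x.mean():.2f} SD={x.std(ddof=1):.2f} K-S p={ks.pvalue:.3f}")

t, p = stats.ttest_rel(pre, post)           # paired t-test, df = 88
print(f"t(88) = {t:.3f}, p = {p:.4f}")
```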

To begin, a descriptive analysis of the study variables was carried out. Tables 1, 2 present the summary descriptives for the scores obtained by the students in the sample, as well as the asymmetry and kurtosis coefficients of their distributions.

Description of critical thinking measurement (PENCRISAL).

| Variable | N | Min. | Max. | Mean | SD | Asym. | Kurt. | K-S p-sig. (exact) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| TOT_PRE | 89 | 11 | 37 | 25.14 | 5.436 | −0.257 | −0.197 | 0.309 |
| RD_PRE | 89 | 0 | 8 | 2.97 | 1.815 | 0.279 | −0.387 | 0.036 |
| RI_PRE | 89 | 2 | 14 | 4.21 | 1.627 | 2.771 | 3.98 | 0.000 |
| RP_PRE | 89 | 1 | 11 | 5.69 | 2.248 | 0.186 | −0.370 | 0.302 |
| TD_PRE | 89 | 2 | 11 | 6.23 | 1.796 | 0.118 | −0.169 | 0.067 |
| SP_PRE | 89 | 1 | 11 | 6.01 | 2.058 | −0.447 | −0.262 | 0.015 |
| TOT_POST | 89 | 16 | 42 | 32.62 | 5.763 | −0.807 | 0.447 | 0.161 |
| RD_POST | 89 | 0 | 10 | 4.81 | 2.189 | −0.069 | −0.692 | 0.059 |
| RI_POST | 89 | 2 | 9 | 5.37 | 1.547 | 0.031 | −0.287 | 0.016 |
| RP_POST | 89 | 0 | 12 | 8.27 | 2.295 | −0.818 | 1.198 | 0.056 |
| TD_POST | 89 | 3 | 11 | 7.82 | 1.748 | −0.540 | 0.117 | 0.033 |
| SP_POST | 89 | 2 | 10 | 6.68 | 1.812 | −0.617 | 0.508 | 0.027 |

TOT_PRE, PENCRISAL pre-test; RD_PRE, Deductive reasoning pre-test; RI_PRE, Inductive reasoning pre-test; RP_PRE, Practical reasoning pre-test; TD_PRE, Decision making pre-test; SP_PRE, Problem solving pre-test; TOT_POST, PENCRISAL post-test; RD_POST, Deductive reasoning post-test; RI_POST, Inductive reasoning post-test; RP_POST, Practical reasoning post-test; TD_POST, Decision making post-test; SP_POST, Problem solving post-test; Min, minimum; Max, maximum; Asym, asymmetry; and Kurt, kurtosis.

Description of metacognition measurement (MAI).

| Variable | N | Min. | Max. | Mean | SD | Asym. | Kurt. | K-S p-sig. (exact) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| TOT_MAI_PRE | 89 | 145 | 233 | 192.13 | 16.636 | −0.071 | 0.275 | 0.557 |
| Decla_PRE | 89 | 22 | 37 | 30.58 | 3.391 | −0.594 | −0.152 | 0.055 |
| Proce_PRE | 89 | 9 | 19 | 14.52 | 2.018 | −0.560 | 0.372 | 0.004 |
| Condi_PRE | 89 | 8 | 23 | 18.04 | 3.003 | −0.775 | 0.853 | 0.013 |
| CONO_PRE | 89 | 44 | 77 | 63.15 | 6.343 | −0.384 | 0.044 | 0.445 |
| Plani_PRE | 89 | 10 | 31 | 24.35 | 4.073 | −0.827 | 0.988 | 0.008 |
| Orga_PRE | 89 | 26 | 48 | 38.20 | 4.085 | −0.307 | 0.331 | 0.022 |
| Moni_PRE | 89 | 15 | 35 | 25.24 | 3.760 | −0.436 | 0.190 | 0.005 |
| Depu_PRE | 89 | 14 | 25 | 20.71 | 2.144 | −0.509 | 0.310 | 0.004 |
| Eva_PRE | 89 | 12 | 28 | 20.49 | 3.310 | −0.178 | −0.044 | 0.176 |
| REGU_PRE | 89 | 97 | 160 | 128.99 | 12.489 | −0.070 | 0.043 | 0.780 |
| TOT_MAI_POST | 89 | 138 | 250 | 197.65 | 17.276 | −0.179 | 0.969 | 0.495 |
| Decla_POST | 89 | 23 | 39 | 31.21 | 3.492 | −0.407 | 0.305 | 0.020 |
| Proce_POST | 89 | 8 | 20 | 15.24 | 2.116 | −0.723 | 0.882 | 0.001 |
| Condi_POST | 89 | 0 | 24 | 18.85 | 2.874 | −0.743 | 0.490 | 0.029 |
| CONO_POST | 89 | 44 | 82 | 65.30 | 6.639 | −0.610 | 1.014 | 0.153 |
| Plani_POST | 89 | 12 | 33 | 25.51 | 3.659 | −0.539 | 0.994 | 0.107 |
| Orga_POST | 89 | 27 | 48 | 39.40 | 4.150 | −0.411 | 0.053 | 0.325 |
| Moni_POST | 89 | 17 | 35 | 26.44 | 3.296 | −0.277 | 0.421 | 0.143 |
| Depu_POST | 89 | 15 | 24 | 20.40 | 2.245 | −0.214 | −0.531 | 0.023 |
| Eva_POST | 89 | 12 | 29 | 20.60 | 3.680 | −0.083 | −0.098 | 0.121 |
| REGU_POST | 89 | 94 | 168 | 132.35 | 12.973 | −0.227 | 0.165 | 0.397 |

TOT_MAI_PRE, MAI pre-test; Decla_PRE, Declarative pre-test; Proce_PRE, Procedural pre-test; Condi_PRE, Conditional pre-test; CONO_PRE, Knowledge pre-test; Plani_PRE, Planning pre-test; Orga_PRE, Organization pre-test; Moni_PRE, Monitoring pre-test; Depu_PRE, Troubleshooting pre-test; Eva_PRE, Evaluation pre-test; REGU_PRE, Regulation pre-test; TOT_MAI_POST, MAI post-test; Decla_POST, Declarative post-test; Proce_POST, Procedural post-test; Condi_POST, Conditional post-test; CONO_POST, Knowledge post-test; Plani_POST, Planning post-test; Orga_POST, Organization post-test; Moni_POST, Monitoring post-test; Depu_POST, Troubleshooting post-test; Eva_POST, Evaluation post-test; and REGU_POST, Regulation post-test.

As we see in the description of all study variables, the evidence is that the majority of them adequately fit the normal model, although some present significant deviations which can be explained by sample size.

Next, to verify whether there were significant differences in the metacognition variable between the measurements taken before and after the intervention, we compared means for related samples with Student’s t-test (see Table 3).
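For reference, the statistic in Table 3 is the standard paired (repeated-measures) t-test; in LaTeX notation:

```latex
t = \frac{\bar{d}}{s_d / \sqrt{n}}, \qquad
\bar{d} = \overline{x_{\mathrm{pre}} - x_{\mathrm{post}}}, \qquad
df = n - 1 = 88
```

As a plausibility check (our arithmetic, not reported in the paper), the TOT_MAI row can be recovered from the table itself: the mean difference is 192.13 − 197.65 = −5.52; the CI half-width (−2.882 + 8.152)/2 = 2.635 equals t(0.975, 88) × SE ≈ 1.987 × SE, so SE ≈ 1.33 and t ≈ −5.52/1.33 ≈ −4.16, matching the reported −4.161.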

Comparison of the METAKNOWLEDGE variable as a function of PRE-POST measurements.

| Variable | Time | N | Mean | SD | Mean difference (CI 95%) | t value | df | p-sig. (bilateral) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| TOT_MAI | Pre | 89 | 192.13 | 16.636 | (−8.152, −2.882) | −4.161 | 88 | 0.000 |
| | Post | 89 | 197.65 | 17.276 | | | | |
| Decla | Pre | 89 | 30.58 | 3.391 | (−1.235, −0.023) | −2.063 | 88 | 0.042 |
| | Post | 89 | 31.21 | 3.492 | | | | |
| Proce | Pre | 89 | 14.52 | 2.018 | (−1.210, −0.228) | −2.911 | 88 | 0.005 |
| | Post | 89 | 15.24 | 2.116 | | | | |
| Condi | Pre | 89 | 18.04 | 3.003 | (−1.416, −0.202) | −2.65 | 88 | 0.010 |
| | Post | 89 | 18.85 | 2.874 | | | | |
| CONO | Pre | 89 | 63.15 | 6.343 | (−3.289, −1.025) | −3.787 | 88 | 0.000 |
| | Post | 89 | 65.30 | 6.639 | | | | |
| Plan | Pre | 89 | 24.35 | 4.073 | (−1.742, −0.573) | −3.934 | 88 | 0.000 |
| | Post | 89 | 25.51 | 3.659 | | | | |
| Orga | Pre | 89 | 38.20 | 4.085 | (−2.054, −0.350) | −2.803 | 88 | 0.006 |
| | Post | 89 | 39.40 | 4.150 | | | | |
| Moni | Pre | 89 | 25.24 | 3.760 | (−1.924, −0.480) | −3.308 | 88 | 0.001 |
| | Post | 89 | 26.44 | 3.296 | | | | |
| TS | Pre | 89 | 20.71 | 2.144 | (−0.159, 0.766) | 1.303 | 88 | 0.196 |
| | Post | 89 | 20.40 | 2.245 | | | | |
| Eval | Pre | 89 | 20.49 | 3.310 | (−0.815, 0.613) | −0.282 | 88 | 0.779 |
| | Post | 89 | 20.60 | 3.680 | | | | |
| REGU | Pre | 89 | 128.99 | 12.489 | (−5.364, −1.356) | −3.331 | 88 | 0.001 |
| | Post | 89 | 132.35 | 12.973 | | | | |

The results show that there are significant differences in the metaknowledge scale total and in most of its dimensions: all the post-test means, both for the scale overall and for the three dimensions of the knowledge factor (declarative, procedural, and conditional), are higher than the pre-test means. In the regulation of cognition dimension, however, there are only significant differences in the total and in the planning, organization, and monitoring dimensions, with means again greater in the post-test than in the pre-test. The troubleshooting and evaluation dimensions do not differ significantly after the intervention.

Finally, for critical thinking skills, the results show significant differences in the scale total and in all five factors across measurement times, with performance means rising after the intervention (see Table 4).

Table 4. Comparison of the CRITICAL THINKING variable as a function of PRE-POST measurements.

| Variable | Time | N | M | SD | Mean difference (95% CI) | t | df | p (two-tailed) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| TOT | Pre | 89 | 25.146 | 5.436 | −8.720, −6.246 | −12.023 | 88 | 0.000 |
|  | Post | 89 | 32.629 | 5.763 |  |  |  |  |
| RD | Pre | 89 | 2.978 | 3.391 | −2.298, −1.364 | −7.794 | 88 | 0.000 |
|  | Post | 89 | 4.809 | 3.492 |  |  |  |  |
| RI | Pre | 89 | 4.213 | 1.627 | −1.608, −0.706 | −5.097 | 88 | 0.000 |
|  | Post | 89 | 5.371 | 1.547 |  |  |  |  |
| RP | Pre | 89 | 18.04 | 2.248 | −1.416, −0.202 | −10.027 | 88 | 0.000 |
|  | Post | 89 | 18.85 | 2.295 |  |  |  |  |
| TD | Pre | 89 | 63.15 | 1.796 | −3.083, −2.063 | −6.54 | 88 | 0.000 |
|  | Post | 89 | 65.30 | 1.748 |  |  |  |  |
| SP | Pre | 89 | 24.35 | 2.058 | −1.135, −0.213 | −2.906 | 88 | 0.005 |
|  | Post | 89 | 25.51 | 1.812 |  |  |  |  |

These results show that metacognition improves with the critical thinking (CT) intervention, and that critical thinking likewise improves alongside the metacognitive gains and the CT skills training. The intervention thus improves both how people think about their own thinking and the results they achieve, since metacognition supports decision-making and the final evaluation of the strategies chosen to solve problems.

Discussion and Conclusions

The general aim of our study was to determine whether a critical thinking intervention program can also influence metacognitive processes. We know that our teaching methodology improves cross-sectional skills in argumentation, explanation, decision-making, and problem-solving, but we did not know whether this intervention also directly or indirectly influences metacognition. In our study, we sought to shed light on this little-known point. Given how central thinking about our own thinking is to the proper functioning of our cognitive machinery and to reaching the best possible results in the problems we face, the lack of attention this theme has received in other research is hard to understand. Our study aimed to remedy this deficiency somewhat.

As noted in the introduction, metacognition concerns the awareness, planning, and regulation of our activities. As understood by many authors, these mechanisms have a blended cognitive and non-cognitive nature, which is a conceptual imprecision; what is known, though, is the enormous influence they exert on fundamental thinking processes. However, there is a large knowledge gap about the factors that improve metacognition itself. This second research lacuna is the one we have also aimed, in part, to narrow with this study. Our guide has been the aim of learning how to improve metacognition through a teaching initiative and through the improvement of fundamental critical thinking skills.

Our study has shed light in both directions, albeit modestly, since its design does not allow us to unequivocally disentangle some of the results obtained. Even so, we believe the data provide relevant information about the relations between thinking skills and metacognition, relations that have rarely been tested empirically. These results allow us to describe those relations better and to guide the design of future studies that can more clearly discern their respective roles. Our data show that the relation is bidirectional: metacognition improves thinking skills and vice versa. It remains for future work to establish a sequence of independent factors that avoids this confounding, a task to which the present study contributes by informing the design of future research in this area.

As the results show, differences in almost all metaknowledge dimensions are higher after the intervention; specifically, in the knowledge factor, the declarative, procedural, and conditional dimensions improve in the post-measurements. This improvement moves in the direction we predicted. However, the regulation of cognition factor only shows differences in the total and in the planning, organization, and monitoring dimensions. We can see that the declarative knowledge dimensions are more sensitive to change than the procedural ones and that, within the latter, the dimensions over which we have more control are also more sensitive. With troubleshooting and evaluation, no changes are seen after the intervention. We may interpret this lack of effect as being due to the fact that everything concerning the evaluation of results is highly determined by calibration capacity, which is influenced by personality factors not considered in our study. Regarding critical thinking, we found differences in all its dimensions, with higher scores following the intervention. We can tentatively state that this improved performance may be influenced not only by the intervention itself but also by the metacognitive improvement observed, although our study could not separate these two factors and merely established their relation.

As we know, when people think about thinking, they can always increase their critical thinking performance. Being conscious of the mechanisms used in problem-solving and decision-making always contributes to improving their execution. However, we need to examine other questions to identify the specific determinants of these effects. Does performance improve because skills benefit metacognitively? If so, how? Is it only levels of consciousness that aid in regulating and planning execution, or do other factors also participate? What level of thinking skill benefits metacognition? At what skill level does this metacognitive change happen? And finally, we know that teaching is always metacognitive to the extent that it helps us know how to proceed with sufficient clarity, but does performance level modify the consciousness or regulation level of our action? Do bad results paralyze metacognitive activity while good ones stimulate it? Ultimately, all of these open questions are the future lines of work our current study suggests. We believe them to be exciting and necessary challenges, which must be faced sooner rather than later. Finally, we cannot forget the implications derived from specific metacognitive instruction, as presented at the start of this study. An intervention of this type should also help us partially answer the aforementioned questions, as we cannot overlook what can be modified or changed by direct metacognition instruction.

Data Availability Statement

Ethics Statement

Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. The patients/participants provided their written informed consent to participate in this study.

Author Contributions

SR and CS contributed to the conception and design of the study. SR organized the database, performed the statistical analysis, and wrote the first draft of the manuscript. SR, CS, and CO wrote sections of the manuscript. All authors contributed to the article and approved the submitted version.

Funding

This study was partly financed by Project FONDECYT no. 11220056, ANID-Chile.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.


Developing Critical Thinking Within Centralized Monitoring Teams

Developing a higher level of critical thinking can create a comprehensive risk story and properly direct mitigations throughout your organization.

Since the FDA published "Guidance for Industry: Oversight of Clinical Investigations—A Risk-Based Approach to Monitoring" in 2011, risk-based monitoring (RBM) has been backed by a guidance document that lays out the FDA's expectations. The European Medicines Agency (EMA) later adopted ICH E6 (R2), making RBM a requirement.

Although robust centralized monitoring (CM) is what ensures risk-based quality monitoring (RBQM) success, regulatory agencies often provide only a general framework for the process without specifying exactly how it should be done. Given this complexity and the lack of detailed guidance within RBM, it is crucial that CM teams develop critical-thinking skills to run these operations effectively. Critical thinking helps CM teams understand root causes, reduce the rate of false-positive signals, determine next steps in an investigation, and recommend risk mitigations.

While plenty of skill-based CM trainings exist for clinical trials, Good Clinical Practice (GCP), regulation, and data management, critical-thinking education has yet to arrive, and it requires a more complex approach involving a variety of formats and activities. Through our research, we found that developing critical thinking is not only possible but can help pharma companies become more efficient in their overall RBM strategy, from more accurate decoding of risk signals and mitigation actions through to continuous improvement and better staff retention.

What is critical thinking?

Critical thinking has many definitions and interpretations, which makes it hard to grasp. Benjamin Bloom’s taxonomy model offers a practical approach to understanding where and how critical thinking occurs. The model provides a hierarchy for achieving a higher order of thinking through knowledge, comprehension, application, analysis, synthesis and evaluation.


Introduction to Monitoring and Evaluation: The Basics

Looking for an introduction to monitoring and evaluation (M&E)? This guide covers the basics of M&E, including key concepts, definitions, steps for designing a plan, and tools and methods for data collection and analysis. Learn how to interpret and report M&E findings, and how to utilize M&E results for program improvement and decision-making. Whether you’re new to M&E or looking to refresh your knowledge, this guide is a valuable resource for anyone involved in program evaluation and performance measurement.

Table of Contents

  • Understanding the Importance of Monitoring and Evaluation
  • Key Concepts and Definitions in Monitoring and Evaluation
  • Designing a Monitoring and Evaluation Plan: Steps and Strategies
  • Tools and Methods for Data Collection and Analysis in Monitoring and Evaluation
  • Making Sense of Data: Interpreting and Reporting Findings
  • Utilizing Monitoring and Evaluation Results for Program Improvement and Decision-Making
  • Case Studies


▶️Understanding the Importance of Monitoring and Evaluation

Monitoring and evaluation (M&E) is a critical process for assessing the performance and effectiveness of programs, projects, and policies. This process involves collecting and analyzing data on program activities, outputs, outcomes, and impact to determine whether the desired results have been achieved.

In today’s complex and dynamic development landscape, Monitoring and Evaluation (M&E) is more crucial than ever. It allows organizations to measure progress towards their goals, identify areas for improvement, and make evidence-based decisions to improve program outcomes.

Monitoring and evaluation (M&E) can also provide valuable information for accountability and transparency. Donors, funders, and other stakeholders expect organizations to be accountable for the resources they receive and to demonstrate the impact of their interventions. M&E helps organizations demonstrate the effectiveness of their programs, build trust with stakeholders, and secure future funding.

M&E is essential for ensuring that programs are effective, efficient, and accountable. By monitoring and evaluating program performance, organizations can identify successes and challenges and make informed decisions to improve program outcomes and impact.

▶️Key Concepts and Definitions in Monitoring and Evaluation

To understand monitoring and evaluation (M&E) effectively, it is essential to be familiar with some of the key concepts and definitions used in this field. Here are some of the essential terms:

  • Indicator: An indicator is a variable that can be measured to determine progress towards achieving a goal or objective. Indicators can be qualitative or quantitative.
  • Outputs: Outputs are the direct products or services delivered by a program or intervention.
  • Outcomes: Outcomes are the changes that occur as a result of the program or intervention. These changes are often related to the program’s objectives or goals.
  • Impact: Impact refers to the long-term effects or broader changes that occur as a result of a program or intervention. Impact can be challenging to measure and may take years to manifest.
  • Baseline: Baseline data is the information collected at the start of a program or intervention, against which progress can be measured.
  • Monitoring: Monitoring is the systematic and continuous process of collecting data on program activities, outputs, and outcomes to track progress towards achieving program goals.
  • Evaluation: Evaluation is the process of assessing the effectiveness, efficiency, relevance, sustainability, and impact of a program or intervention. It involves collecting and analyzing data to determine the program’s success or failure.
  • Logic Model: A logic model is a visual representation of a program’s theory of change. It describes the program’s inputs, activities, outputs, outcomes, and impact.
  • Performance Indicator: A performance indicator is a specific and measurable variable used to assess program performance.
  • Data Quality: Data quality refers to the accuracy, completeness, and reliability of data. High-quality data is critical for making informed decisions based on evidence.

Understanding these key concepts and definitions is crucial for developing effective M&E plans and implementing successful programs.
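To make these definitions concrete, here is a small, hypothetical Python sketch of how a logic model and its indicators might be represented in code. The class names, fields, and values are invented for illustration; they are not a standard M&E schema.

```python
# Hypothetical sketch: representing a logic model (inputs -> activities ->
# outputs -> outcomes -> impact) with indicators that carry a baseline and
# a target, as defined above. All values are illustrative.
from dataclasses import dataclass, field

@dataclass
class Indicator:
    name: str
    baseline: float
    target: float
    actual: float | None = None  # filled in as monitoring data arrives

@dataclass
class LogicModel:
    inputs: list[str] = field(default_factory=list)
    activities: list[str] = field(default_factory=list)
    outputs: list[str] = field(default_factory=list)
    outcomes: list[Indicator] = field(default_factory=list)
    impact: str = ""

model = LogicModel(
    inputs=["Trainers", "Budget"],
    activities=["Run literacy workshops"],
    outputs=["Workshops delivered"],
    outcomes=[Indicator("Reading proficiency (%)", baseline=45, target=70)],
    impact="Improved educational attainment",
)
```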


▶️Designing a Monitoring and Evaluation Plan: Steps and Strategies

Designing a monitoring and evaluation (M&E) plan involves several steps and strategies to ensure that the plan is effective in measuring program performance, identifying areas for improvement, and making evidence-based decisions. Here are some of the key steps and strategies:

  • Define Program Goals and Objectives: The first step is to clearly define program goals and objectives. This provides a clear direction for selecting appropriate indicators, data collection methods, and analysis techniques.
  • Identify Key Performance Indicators (KPIs): KPIs are the specific variables that will be used to measure program performance. It is essential to identify KPIs that are measurable, relevant, and aligned with program goals and objectives.
  • Determine Data Collection Methods: There are several methods for collecting data, including surveys, interviews, focus groups, observations, and document reviews. The choice of data collection method will depend on the type of data needed, available resources, and the target population.
  • Develop Data Collection Tools: Once data collection methods have been identified, it is necessary to develop data collection tools, such as survey questionnaires, interview protocols, or observation checklists. These tools should be pre-tested to ensure they are valid and reliable.
  • Determine Data Analysis Methods: There are different methods for analyzing data, such as descriptive statistics, inferential statistics, and qualitative analysis. The choice of analysis method will depend on the type of data collected and the research questions.
  • Develop a Data Management Plan: A data management plan covers the storage, organization, and analysis of data, and should include procedures for data entry, cleaning, storage, and backup.
  • Develop a Reporting Plan: Reporting is an essential component of M&E. The reporting plan should specify the type of report, the audience, and the frequency of reporting. Evaluation reports should be clear, concise, and actionable.
  • Ensure Ethical Considerations: Ethical considerations are critical when designing and conducting M&E. It is essential to obtain informed consent from participants, maintain confidentiality, and ensure data security.
  • Establish a Monitoring and Evaluation Schedule: A monitoring and evaluation schedule outlines the timeline for data collection, analysis, and reporting. This schedule should be realistic and consider the availability of resources.
  • Budget for Monitoring and Evaluation: M&E requires resources, including personnel, equipment, and software. It is essential to budget for these resources so that the M&E plan can be effectively implemented.

By following these steps and strategies, organizations can develop a comprehensive M&E plan that will help them measure program performance, identify areas of improvement, and make informed decisions.
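As a concrete illustration of the KPI step above, the following hypothetical Python sketch tracks indicators against their baselines and targets. The KPI class and all numbers are invented for illustration; a real plan would draw these from its results framework.

```python
# Hypothetical sketch: tracking KPIs against baselines and targets.
from dataclasses import dataclass

@dataclass
class KPI:
    name: str
    baseline: float
    target: float
    latest: float  # most recent measured value

    def progress(self) -> float:
        """Share of the baseline-to-target distance covered so far."""
        span = self.target - self.baseline
        return (self.latest - self.baseline) / span if span else 0.0

kpis = [
    KPI("Patient wait time (min)", baseline=90, target=60, latest=72),
    KPI("Clinic visits per month", baseline=400, target=600, latest=520),
]
for kpi in kpis:
    print(f"{kpi.name}: {kpi.progress():.0%} of target reached")
```

Note that the progress formula also handles indicators meant to decrease (such as wait times), because the baseline-to-target span is negative in that case.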

▶️Tools and Methods for Data Collection and Analysis in Monitoring and Evaluation

In monitoring and evaluation (M&E), data collection and analysis are critical components that help organizations measure program performance, identify areas for improvement, and make evidence-based decisions. Here are some of the tools and methods commonly used for data collection and analysis in M&E:

Data Collection Tools:

  • Surveys: Surveys are a common tool for collecting data. They can be administered in person, by phone, or online, and can be used to collect both quantitative and qualitative data.
  • Interviews: Interviews can be used to collect in-depth qualitative data. They can be conducted in person, by phone, or online, and can be structured, semi-structured, or unstructured.
  • Focus Groups: Focus groups involve a group of individuals who share similar characteristics or experiences. They are led by a moderator and are designed to elicit qualitative data through group discussion.
  • Observations: Observations involve systematically observing program activities to collect data. Observations can be conducted in person or remotely.
  • Document Review: Document review involves analyzing program documents, such as program reports, progress reports, and financial reports, to collect data.

Data Analysis Methods:

  • Descriptive Statistics: Descriptive statistics are used to summarize and describe the characteristics of a dataset. This includes measures such as mean, median, mode, and standard deviation.
  • Inferential Statistics: Inferential statistics are used to make inferences about a population based on a sample. This includes techniques such as hypothesis testing and regression analysis.
  • Qualitative Analysis: Qualitative analysis involves analyzing qualitative data, such as interview transcripts, to identify patterns, themes, and trends.
  • Data Visualization: Data visualization involves presenting data in a visual format, such as charts, graphs, and maps. This can help to identify patterns and trends in data.
  • Geographic Information Systems (GIS): GIS involves analyzing and presenting data in a spatial format. This can help to identify patterns and relationships in data that may not be apparent in traditional data analysis methods.

By using these tools and methods for data collection and analysis, organizations can gain insights into program performance and make evidence-based decisions to improve program outcomes. It is essential to select the appropriate tools and methods based on the type of data being collected and the research questions.
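As a minimal illustration of the descriptive-statistics and disaggregation methods listed above, the following Python sketch uses pandas on a small invented survey dataset. The column names and values are placeholders, not data from any real program.

```python
# Minimal sketch: descriptive statistics on an invented survey dataset.
import pandas as pd

survey = pd.DataFrame({
    "district": ["North", "North", "South", "South", "East"],
    "satisfaction": [4, 5, 3, 4, 2],   # 1-5 scale
    "wait_minutes": [35, 28, 60, 52, 75],
})

# Mean, standard deviation, quartiles, etc. for the numeric columns.
print(survey.describe())

# Median and mode, two of the measures named above.
print("Median wait:", survey["wait_minutes"].median())
print("Mode of satisfaction:", survey["satisfaction"].mode().tolist())

# A simple disaggregation, often the first step in M&E analysis.
print(survey.groupby("district")["satisfaction"].mean())
```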

▶️Making Sense of Data: Interpreting and Reporting Findings

Once data has been collected and analyzed in monitoring and evaluation (M&E), it is necessary to interpret and report the findings. This process involves making sense of the data and presenting it in a way that is clear, concise, and actionable. Here are some key considerations when interpreting and reporting M&E findings:

  • Identify Key Findings: The first step is to identify the key findings from the data analysis. These should be the most significant and relevant results that answer the research questions.
  • Analyze Findings in Context: It is essential to analyze the findings in the context of the program goals and objectives, as well as the broader context of the environment in which the program operates. This can help to identify the implications of the findings for program implementation and improvement.
  • Use Data Visualization: Data visualization can help to communicate findings in a clear and concise manner. This includes charts, graphs, and other visual representations of the data that can help to highlight patterns and trends.
  • Interpret Findings: Once the key findings have been identified, it is necessary to interpret them. This involves explaining the meaning and significance of the findings, as well as any limitations or caveats.
  • Provide Recommendations: M&E findings should lead to recommendations for program improvement. These recommendations should be based on the data analysis and should be feasible and actionable.
  • Report Findings: Finally, the findings should be reported in a clear, concise, and accessible manner. The report should be tailored to the intended audience, such as program staff, donors, or other stakeholders.

Reporting M&E findings is an essential component of the M&E process, as it helps to communicate program performance and guide program improvement. By following these key considerations, organizations can ensure that M&E findings are meaningful, relevant, and actionable.
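To illustrate the data-visualization point above, here is a minimal Python sketch (using matplotlib) that charts invented baseline and endline values for three indicators. The indicator names and numbers are placeholders chosen only to show the technique.

```python
# Minimal sketch: a grouped bar chart comparing baseline and endline
# values for a few invented indicators.
import matplotlib.pyplot as plt
import numpy as np

indicators = ["Wait time (min)", "Satisfaction (1-5)", "Visits/month"]
baseline = [90, 3.1, 400]
endline = [63, 4.2, 540]

x = np.arange(len(indicators))
width = 0.35

fig, ax = plt.subplots()
ax.bar(x - width / 2, baseline, width, label="Baseline")
ax.bar(x + width / 2, endline, width, label="Endline")
ax.set_xticks(x)
ax.set_xticklabels(indicators)
ax.set_title("Indicator values before and after the intervention")
ax.legend()
plt.tight_layout()
plt.show()
```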

▶️Utilizing Monitoring and Evaluation Results for Program Improvement and Decision-Making

The purpose of monitoring and evaluation (M&E) is to collect and analyze data on program performance, and to use this information to make evidence-based decisions and improve program outcomes. Here are some ways in which M&E results can be utilized for program improvement and decision-making:

  • Identify Strengths and Weaknesses: M&E results can help to identify the strengths and weaknesses of a program. By identifying the areas where a program is performing well and those that need improvement, program staff can make targeted interventions to improve program outcomes.
  • Make Data-Informed Decisions: M&E results provide data that can be used to make evidence-based decisions. This can include decisions about program implementation, resource allocation, and strategic planning.
  • Adjust Program Strategies: M&E results can inform adjustments to program strategies. If a program is not achieving its intended outcomes, program staff can use M&E results to identify the areas where changes need to be made.
  • Share Results with Stakeholders: Sharing M&E results with stakeholders, such as program staff, funders, and beneficiaries, can help to build buy-in and support for program improvement efforts.
  • Build Accountability: M&E results can help to build accountability for program outcomes. By tracking program performance over time, program staff can demonstrate progress toward program goals and justify resource allocation decisions.
  • Learn and Adapt: M&E results can provide a learning opportunity for program staff. By analyzing the data and identifying what worked and what did not work, program staff can adapt program strategies and make improvements over time.

By utilizing M&E results for program improvement and decision-making, organizations can improve program outcomes and ensure that resources are being used effectively. It is essential to make M&E an integral part of program design and implementation to ensure that data is being collected and analyzed on an ongoing basis.

▶️ Case Studies

Each of these case studies offers a context-specific evaluation perspective, highlighting the areas of impact and improvement addressed through Monitoring and Evaluation (M&E) efforts.

Case Study 1: Health Clinic Efficiency

Basis of Evaluation: Efficiency and Service Delivery Improvement

Context: In a rural region with limited access to healthcare, a nonprofit organization aimed to improve the efficiency of a local health clinic. The clinic served a large population, but long wait times and resource allocation issues were prevalent.

M&E Approach: The evaluation focused on the clinic’s efficiency and its impact on service delivery. Key evaluation components included:

  • Time-Motion Studies: To assess clinic workflow, identify bottlenecks, and reduce patient waiting times.
  • Patient Satisfaction Surveys: To gather feedback from patients regarding their experiences and perceived improvements.
  • Resource Utilization Analysis: To optimize the allocation of staff and resources to meet patient needs effectively.

Outcome: M&E data revealed significant improvements in clinic efficiency, marked by a 30% reduction in patient wait times. Patient satisfaction scores increased notably, indicating improved service quality. The clinic’s ability to serve a larger number of patients also improved, showcasing the positive impact of M&E on service delivery and efficiency.

Case Study 2: Education Program Impact

Basis of Evaluation: Educational Outcome Improvement

Context: A local education authority implemented a literacy program in primary schools to enhance student performance in reading and comprehension.

M&E Approach: The evaluation was primarily concerned with assessing the program’s impact on educational outcomes. Key evaluation components included:

  • Pre- and Post-Tests: To measure changes in student reading levels before and after program implementation.
  • Classroom Observations: To assess the effectiveness of teaching methods and identify areas for improvement.
  • Teacher Surveys: To gather feedback from educators and enhance program delivery.

Outcome: M&E findings indicated a significant increase in student reading proficiency, with 80% of students demonstrating improvement. Classroom observations revealed that innovative teaching methods were effective, and teacher feedback led to further refinements in the program. This case study emphasized the role of M&E in improving educational outcomes.

Case Study 3: Environmental Conservation

Basis of Evaluation: Conservation and Biodiversity Preservation

Context: A conservation organization initiated a project to protect a threatened wildlife habitat from illegal logging and habitat degradation.

M&E Approach: The evaluation focused on assessing the impact of conservation efforts on the environment and biodiversity. Key evaluation components included:

  • Biodiversity Assessments: Using camera traps and field surveys to monitor wildlife populations.
  • Satellite Imagery: To track deforestation and habitat changes.
  • Community Engagement: To involve local residents in conservation efforts and assess their participation.

Outcome: M&E data showed a stable or increasing population of threatened species and a significant reduction in illegal logging activities. The engagement of local communities contributed to the project’s success. This case study highlighted the role of M&E in conservation and community engagement.

Case Study 4: Humanitarian Aid Distribution

Basis of Evaluation: Effective Humanitarian Assistance

Context: In the aftermath of a natural disaster, an international humanitarian organization provided emergency relief to affected communities.

M&E Approach: The evaluation aimed to assess the effectiveness of humanitarian aid distribution. Key evaluation components included:

  • Tracking Aid Distribution: To ensure efficient and targeted delivery of relief items.
  • Beneficiary Interviews: To assess needs, satisfaction, and the impact of aid on beneficiaries.
  • Shelter Quality Monitoring: To improve the quality of shelter provided to disaster survivors.

Outcome: M&E data indicated efficient and targeted aid distribution, with relief items reaching those most in need. Shelter conditions improved over time, and beneficiary feedback led to adjustments in the aid distribution process. This case study highlighted the role of M&E in ensuring the effective delivery of humanitarian assistance.

Case Study 5: Government Policy Evaluation

Basis of Evaluation: Policy Impact and Economic Growth

Context: A government implemented a policy to promote renewable energy adoption and reduce carbon emissions.

M&E Approach: The evaluation focused on assessing the impact of the policy on specific policy objectives. Key evaluation components included:

  • Monitoring Renewable Energy Production: To track progress toward renewable energy targets.
  • Emissions Assessments: To measure the reduction in carbon emissions.
  • Economic Data Analysis: To analyze job creation and economic growth in the renewable energy sector.

Outcome: M&E data demonstrated a substantial increase in renewable energy production, a decrease in carbon emissions, and positive economic impacts. This case study underscored the role of M&E in assessing policy effectiveness and its economic consequences.

▶️Conclusion: Harnessing the Power of Monitoring and Evaluation

Monitoring and Evaluation (M&E) are not mere bureaucratic procedures but powerful tools that enable organizations and governments to navigate the complex landscape of projects, programs, and policies. Through systematic data collection, analysis, and interpretation, M&E sheds light on the effectiveness and impact of initiatives across diverse sectors. In our exploration of the basics of M&E, we’ve uncovered its transformative potential through real-world case studies.

M&E as the Catalyst for Improvement

In Case Study 1, we witnessed how M&E can enhance the efficiency of healthcare delivery, reducing wait times and improving patient satisfaction. M&E serves as a catalyst for identifying bottlenecks, optimizing resource allocation, and ultimately enhancing service delivery in the healthcare sector.

Empowering Education through Data

Case Study 2 highlighted the power of M&E in the field of education. By measuring changes in student performance and assessing teaching methods, M&E helps educational authorities fine-tune programs, ultimately empowering students with improved literacy and comprehension skills.

Conservation and Community Engagement

In Case Study 3, we observed the critical role of M&E in environmental conservation. M&E enables organizations to track changes in biodiversity, identify threats like illegal logging, and engage local communities in conservation efforts. This holistic approach underscores the importance of community involvement in environmental initiatives.

Effective Humanitarian Aid

Case Study 4 demonstrated how M&E ensures the efficient distribution of humanitarian aid. By tracking aid distribution, assessing beneficiary needs, and improving shelter quality, M&E plays a pivotal role in delivering timely and targeted assistance during crises.

Informing Evidence-Based Policy

Lastly, Case Study 5 exemplified the impact of M&E on policy evaluation. By monitoring renewable energy production, emissions reductions, and economic growth, M&E supports data-driven policymaking and helps governments achieve their objectives.

A Roadmap to Success

In conclusion, Monitoring and Evaluation serve as a roadmap to success, guiding organizations and policymakers toward evidence-based decisions and meaningful improvements. Through the careful collection and analysis of data, M&E empowers stakeholders to adapt, innovate, and achieve sustainable development goals.

As you embark on your journey into the world of Monitoring and Evaluation, remember that these tools are not just a means to an end; they are the cornerstone of informed progress. By harnessing the power of M&E, we can build a future where every initiative, whether in healthcare, education, conservation, humanitarian aid, or policymaking, is driven by data, enriched by insights, and dedicated to positive change.

With the basics of M&E under your belt and a commitment to its principles, you are well-equipped to embark on the path of evidence-driven impact and make a difference in the world.


IOM is considered an efficient organization with extensive field presence, implementing its many interventions through a large and decentralized network of regional offices and country offices. 1 IOM puts a strong focus on results-based management ( RBM ), which is promoted to strengthen organizational effectiveness and move towards evidence-based and results-focused programming.

  • 1 For the purpose of the IOM Monitoring and Evaluation Guidelines, the term intervention is used interchangeably for either a project, programme, strategy or a policy

A results-based approach requires robust monitoring and evaluation ( M&E ) systems that provide government officials, IOM staff, partners, donors and civil society with better means to do the following:

  • Inform decision-making by providing timely feedback to management on intervention context, risks, challenges, results, as well as successful approaches;
  • Meet accountability obligations by informing donors, beneficiaries and other stakeholders on IOM’s performance, progress made in the achievement of results and the utilization of resources; 2
  • Draw lessons learned from experience to provide feedback into the planning, design and implementation of future interventions and improve service delivery.

M&E, at times, may seem challenging in the context of IOM’s interventions, where project duration may not be “long enough” to incorporate strong M&E, or where security, time pressure, funding and/or capacity constraints may hinder the rigorous implementation of M&E. For the same reasons, the benefits of M&E may go unrecognized as early as the proposal-writing stage, resulting in insufficient attention being given to it. The IOM Monitoring and Evaluation Guidelines is a good opportunity to correct those impressions and put M&E at the centre of sound performance and of fulfilling the duty of accountability.

As IOM’s global role in addressing migration-related challenges has diversified and expanded, new political and organizational realities have demanded a different conceptualization of M&E, as well as reframed organizational thinking about what it constitutes and its application. These realities include the numerous operational demands, limited resources, accelerated speed of expected response and immediate visibility for impact and accountability, as well as the expected rapid integration of new organizational concepts, such as “value for money” and Theory of Change into daily work. Learning and information-sharing also channel a number of key messages and recommendations to be considered.

IOM’s internal and external environments have also undergone significant changes in recent years, with an increased focus on migration worldwide. As a United Nations-related agency, IOM is a main reference on migration, supporting the attainment of migration-related commitments of the 2030 Agenda for Sustainable Development (Sustainable Development Goals or SDGs) and contributing to the implementation of the Global Compact for Safe, Orderly and Regular Migration. IOM is also an increasingly important contributor to migration data and analysis on a global scale, including for the implementation of the 2030 Agenda, and is praised for its operational and pragmatic approach to managing migration, in line with its mandate and the Migration Governance Framework ( MiGOF ). Furthermore, IOM is internally guided by the Strategic Vision, which does not supersede IOM’s existing MiGOF: while MiGOF sets out a set of objectives and principles, it does not set out a focused direction of travel, which is what the Strategic Vision is intended to provide. The Strategic Vision also aims to strengthen IOM’s capacity to contribute to the SDGs, the Global Compact for Migration and other existing cooperative frameworks. This chapter will provide an overview of monitoring and evaluation as key components of RBM at IOM, outline the differences between monitoring and evaluation, and explain how, together, M&E are relevant to IOM’s strategic approach and objectives.

  • 2 For the purpose of the IOM Monitoring and Evaluation Guidelines , IOM uses the OECD/DAC definition of beneficiary/ies or people that the Organization seeks to assist as “the individuals, groups, or organisations, whether targeted or not, that benefit directly or indirectly, from the development intervention. Other terms, such as rights holders or affected people, may also be used.” See OECD, 2019 , p. 7. The term beneficiary/ies or people that IOM seeks to assist, will intermittently be used throughout the IOM Monitoring and Evaluation Guidelines , and refers to the definition given above, including when discussing humanitarian context.

Over the last 15 years, international actors have increasingly shifted to RBM . RBM supports better performance and greater accountability by applying a clear plan to manage and measure an intervention, with a focus on the results to be achieved. 3 By identifying in advance the intended results of an intervention and how progress towards them will be measured, it becomes easier to manage the intervention and to determine whether a genuine difference has been made for the people concerned.

  • 3 UNEG, 2007.

At IOM , RBM is defined as a management strategy that sets out clear objectives and outcomes to define the way forward, and uses specific indicators to verify the progress made. RBM encompasses the whole project cycle: planning, managing implementation, monitoring, reporting and evaluation. 4

The aim of RBM is to provide valuable information for decision-making and lessons learned for the future, which includes the following:

• Planning, setting the vision and defining a results framework;

• Implementing interventions to achieve the results;

• Monitoring to ensure results are being achieved;

• Encouraging learning through reporting and evaluation.

Among other aspects, an RBM approach requires strong M&E , as well as knowledge management.

  • 4 IOM, 2018a (Internal link only).

In 2011, IOM adopted a conscious RBM approach at the project level as seen in the first edition of the IOM Project Handbook . The 2017 version of the IOM Project Handbook provides yet more detailed guidance on RBM and has made the use of a results matrix a requirement to improve IOM’s work. 5

At a corporate level, IOM has identified a set of global results that it wants to achieve by 2023, using its MiGOF as the basis for the Organization’s work and the Strategic Vision as a “direction of travel”. This is condensed in the Strategic Results Framework ( SRF ). This framework specifies the highest level of desired change IOM would like to achieve. The RBM approach builds a bridge between the framework and IOM’s traditional programmes. This allows IOM to report on the results it has collectively achieved, rather than on the activities performed.

  • 5 See IOM, 2017 (Internal link only)

Monitoring and evaluation are important parts of RBM , based on clearly defined and measurable results, processes, methodologies and tools to achieve results. M&E can be viewed as providing a set of tools to enable RBM, helping decision makers track progress and demonstrate an intervention’s higher-level results. 6 Results-based M&E moves from a focus on the immediate results, such as the successful implementation of activities and production of outputs, to the higher-level results, looking at the achievement of outcomes and impacts. Figure 1.1 shows RBM as a “life cycle approach” within which M&E are incorporated.

Figure 1.1.  Results-based management life cycle

Source : Adapted from United Nations Development Programme, 2009 , p. 10.

  • 6 Kusek and Rist, 2004 . See also UNDG, 2011 .
RBM, a management strategy that sets out clear objectives and outcomes to define the way forward and uses specific indicators to verify the progress made, is seen as taking a life cycle approach, including planning, managing, monitoring, reporting and evaluating. RBM at IOM is a means to further strengthen IOM’s interventions. RBM encourages project developers and managers to clearly articulate an intervention’s objective, the desired change it aims to achieve, what is required to achieve such change, whether the desired change is achieved and how ongoing or future performance can improve further through learning. In essence, M&E supports RBM by monitoring and measuring intervention progress towards predetermined targets, refining implementation, and evaluating changes and results to further improve future interventions.

IOM resources

  • 2017 IOM Project Handbook . Second edition. Geneva (Internal link only).
  • 2018a Results-based management in IOM (Internal link only).
  • 2020a RBM Results Based Management SharePoint (Internal link only).

Other resources

Kusek, J.Z. and R. Rist

  • 2004 Ten Steps to a Results-Based Monitoring and Evaluation System: A Handbook for Development Practitioners . World Bank, Washington, D.C.

Organisation for Economic Co-operation and Development ( OECD )

  • 2019 Better Criteria for Better Evaluation: Revised Evaluation Criteria Definitions and Principles for Use . OECD/Development Assistance Committee ( DAC ) Network on Development Evaluation.

United Nations Development Group ( UNDG )

  • 2011 Results-Based Management Handbook: Harmonizing RBM concepts and approaches for improved development results at country level .

United Nations Development Programme ( UNDP )

  • 2009 Handbook on Planning, Monitoring and Evaluating for Development Results . New York.

United Nations Evaluation Group ( UNEG )

  • 2007 The Role of Evaluation in Results-based Management . Reference document, UNEG/REF(2007)1

Given IOM’s broad thematic portfolio and the decentralized nature of the Organization, it is important, when implementing an intervention, to provide justification for the implementation and to articulate what changes are expected to occur and how. Monitoring helps do just that.

Monitoring can often be confused with reporting, which is one of the components of monitoring. While reporting only refers to the compilation, transfer and distribution of information, monitoring focuses on the collection and analysis, on a regular basis, of the information required for reporting. Therefore, monitoring encompasses the planning, designing, selecting of methods and systematic gathering and analysis of the content, while reporting summarizes that content with the purpose of delivering the relevant information.

IOM defines monitoring as an established practice of internal oversight that provides management with an early indication of progress, or lack thereof, in the achievement of results, in both operational and financial activities. 7 Monitoring can take various shapes, vary in frequency and be tailored to a specific context, usually depending on the intervention’s objectives. In an IOM intervention, there are four key areas for monitoring: activity monitoring, results monitoring, financial monitoring and risk monitoring. 8

Figure 1.2. Scope of monitoring – Four key monitoring areas

Source : Adapted from IOM Regional Office Pretoria M&E presentation on Scope of Monitoring (2017).

While these are the four essential areas to monitor at IOM, additional types of monitoring are outlined in chapter 3 of the IOM Monitoring and Evaluation Guidelines .

In order to standardize its approach to monitoring, IOM has developed relevant standardized tools: (a) the IOM Results Matrix; and (b) the Results Monitoring Framework. 9 Even so, it can still be a challenge for IOM staff to tailor these tools to the monitoring needs of the diverse portfolio of context-specific interventions that IOM implements. How to monitor within IOM therefore largely depends on how IOM responds to particular migration-related needs within an intervention. Monitoring should be sufficiently flexible to allow for an assessment of whether interventions respond to emerging needs.

  • 7 IOM, 2018b , p. 2.
  • 8 Modules 2 and 4 of IOM Project Handbook . Further information can be found in chapter 3 of the IOM Monitoring and Evaluation Guidelines .
  • 9 See the IOM Results Matrix section of chapter 3 for a detailed description of each of these tools.

Monitoring is necessary, because it continuously generates the information needed to measure progress towards results throughout implementation and enables timely decision-making. Monitoring helps decision makers be anticipatory and proactive, rather than reactive, in situations that may become challenging to control. It can bring key elements of strategic foresight to IOM interventions.

Monitoring is undertaken on an ongoing basis during the implementation of an intervention. Where possible, it is essential to regularly ask relevant “monitoring questions”, checking whether (a simple illustration follows the list):

• Planned activities are actually taking place (within the given time frame);

• There are gaps in the implementation;

• Resources have been/are being used efficiently;

• The intervention’s operating context has changed.
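As a rough illustration of how such monitoring questions can be checked routinely, the sketch below (Python, with invented activities, dates and budget figures) flags overdue activities and a budget burn rate that is running ahead of schedule. It is a toy example, not a prescribed IOM tool.

```python
from datetime import date

# Invented activity plan: (activity, planned completion date, done?).
activities = [
    ("baseline survey", date(2024, 3, 31), True),
    ("training of trainers", date(2024, 5, 15), False),
    ("distribution round 1", date(2024, 6, 30), False),
]

today = date(2024, 6, 1)

# Are planned activities taking place, and where are the gaps?
overdue = [name for name, due, done in activities if not done and due < today]
print("Overdue activities:", overdue or "none")

# Are resources being used efficiently? Compare spending to elapsed time.
budget_total, spent = 100_000, 70_000
time_elapsed = 0.5  # assume we are halfway through the implementation period
burn_rate = spent / budget_total
if burn_rate > time_elapsed + 0.1:
    print(f"Spending ({burn_rate:.0%}) is running ahead of schedule ({time_elapsed:.0%}).")
```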


Monitoring is an established practice of internal oversight that provides management with an early indication of progress, or lack thereof, in the achievement of results, in both operational and financial activities. 10 Monitoring at IOM is a routine – but important – process of data collection and analysis, as well as an assessment of progress towards intervention objectives. In other words, it allows for the frequent assessment of the implementation process within IOM interventions. Due to the different thematic areas and diverse approaches to responding to country, regional or global needs and expectations, a standardized approach to monitoring IOM interventions remains challenging. Monitoring needs to be flexible enough to assess whether and how IOM’s interventions are responding to emerging needs. Chapters 2, 3 and 4 of the IOM Monitoring and Evaluation Guidelines will provide more details on how monitoring achieves this.
  • 10 IOM, 2018b , p. 2.

IOM resources

    2017 Module 2 and Module 4 . In: IOM Project Handbook . Second edition. Geneva (Internal link only).

    2018b Monitoring Policy . IN/31. 27 September.

Other resources

International Federation of Red Cross and Red Crescent Societies ( IFRC )

    2011 Project/Programme Monitoring and Evaluation (M&E) Guide . Geneva.

While monitoring may ask the questions, “What is the current status of implementation? What has been achieved so far? How has it been achieved? When has it been achieved?”, evaluation helps, in addition, to understand why and how well something was achieved, and provides a judgement on the worth and merit of an intervention. Evaluation allows for a more rigorous analysis of the implementation of an intervention, also looking at why one effort worked better than another. Evaluation enriches learning processes and improves services and decision-making capability for those involved in an intervention. It also provides information not readily available from monitoring, derived from the use of evaluation criteria such as in-depth consideration of impact, relevance, efficiency, effectiveness, coverage, coordination, sustainability, connectedness and coherence.

IOM defines evaluation as the systematic and objective assessment of an ongoing or completed intervention, including a project, programme, strategy or policy, its design, implementation and results.

Evaluation can be considered a means to discuss causality. While monitoring may show whether indicators have progressed, it remains limited in explaining, in detail, why a change occurred. Evaluation, on the other hand, looks at the question of what difference the implementation of an activity and/or intervention has made. It helps answer this question by assessing monitoring data that reflect what has happened and how, in order to identify why it happened. Evaluation provides practitioners with the required in-depth and evidence-based data for decision-making purposes, as it can assess whether, how, why and what type of change has occurred during an intervention.

Evaluation is also critical to assess the relevance and performance of the means and progress towards achieving change. Effective conduct and the use of credible evaluations go hand in hand with a culture of results-oriented, evidence-driven learning and decision-making. When evaluations are used, they contribute not only to accountability, but also to creating space for reflection, learning and the sharing of findings, innovations and experiences. They are a source of reliable information to help improve IOM ’s service provision to beneficiaries, migrants, Member States and donors. Findings, lessons learned and best practices from previous evaluations can also help enhance an intervention design and enrich the formulation of results and the results framework. Evaluations have their own methodological and analytical rigour, determined at the planning stage and depending on their intention and scope.

An evaluation can be conducted at every stage of the intervention cycle, depending on the type of evaluation being implemented. For example, an ex-ante evaluation conducted during the conceptualization phase of an intervention can set a strong foundation for a successful implementation. Evaluations conducted during implementation (for instance, real-time and midterm evaluations ) are good sources for providing feedback on the status and progress, strengths or weaknesses of implementation. 11, 12 In this sense, evaluations provide decision makers with timely information to make adjustments, as required.

  • 11 An ex-ante evaluation assesses the validity of the design, target populations and objectives of an intervention. For more information, see the section “Types of evaluation” in chapter 5 .
  • 12 A real-time evaluation provides instant feedback to intervention managers about an ongoing intervention. A midterm evaluation is carried out for the purpose of improving intervention performance or, in some cases, to amend an intervention’s objective. For more information, see also the section “Types of evaluation” in chapter 5 .

Evaluation should not be confused with concepts, such as review, assessment, needs assessments/appraisals or audit. Refer to the following definitions: 13

Review – According to the glossary, 14 a review is “an assessment of the performance of an intervention, periodically or on an ad hoc basis”. A review is more extensive than monitoring but less comprehensive than an evaluation.
Assessment – An assessment can commonly be defined as the action of estimating the nature, ability or quality of something. In the context of development interventions, it is often associated with another term to focus on what will be assessed, such as a needs assessment, skills assessment, context assessment or results-based assessment. It can take place prior to, during or after an intervention and may be used in an evaluative context.
Needs assessments and appraisals – These are tools enabling decision makers to choose between optional activities, as well as refine the final design of a project or programme.
Audit – An audit is an activity of supervision that verifies whether existing policies, norms and instruments are being applied and used adequately. An audit also examines the adequacy of organizational structures and systems and performs risk assessments. It focuses on accountability and control of the efficient use of resources.
  • 13 Adapted from IOM, 2018c .
  • 14 Adapted from OECD, 2010 , p. 34.

IOM resources

    2018c IOM Evaluation Policy . Office of the Inspector General. September.

Other resources

Organisation for Economic Co-operation and Development ( OECD )

    2010 Glossary of Key Terms in Evaluation and Results Based Management . OECD/DAC , Paris.

Although often grouped together, M&E are two distinct but related functions. Recognizing the difference between monitoring and evaluation helps those implementing interventions understand that the two are indeed complementary, as well as mutually beneficial, functions. The main differences between them are their focus of assessment and the timing at which each is conducted.

Monitoring , on the one hand, focuses on whether the implementation is on track to achieving its intended results and objectives, in line with established benchmarks. Evaluation , on the other hand, can provide evidence on whether the intervention and its approach to implementation is the right one, and if so, how and why changes are taking place. Evaluation also highlights the strengths and weaknesses of the design of the intervention. In other words, while monitoring can provide information on how the implementation is doing, evaluation can go a step further and demonstrate whether the expected change has been attained, whether the intervention contributed to that change ( impact analysis/evaluation ) and whether the intervention itself and its approach were the most suited to address the given problem.

In terms of timing, while monitoring tracks an intervention’s progress and achievement of results on an ongoing basis, throughout implementation, evaluation is usually a one-off activity, undertaken at different points of an intervention’s life cycle.

Keeping the vertical logic in mind when monitoring an intervention is useful, as it can help clarify the specific level of result being monitored and how individual results contribute to the overall implementation objectives. 15 In this sense, monitoring can function as a tool to help review progress against the management objectives. Similarly, when evaluating an intervention, it is important to consider its vertical logic to enable a more holistic approach to evaluation.

The following two diagrams show monitoring and evaluation in relation to the vertical logic. Chapter 3 of the IOM Monitoring and Evaluation Guidelines will further elaborate the vertical logic. Note that the two diagrams include indicative questions that pertain to monitoring and evaluation, and that there may be many other questions applicable in the context of vertical logic that are not included in the following figures.

Figure 1.3. Monitoring and vertical logic

Figure 1.4. Evaluation and vertical logic

Source : Adapted from IFRC, 2011 . See also OECD, n.d.

  • 15 Vertical logic refers to the means–end relationship between activities and results, as well as the relationship between the results and their contribution to the broader objective ( Module 2 of IOM Project Handbook , p. 122) (Internal link only). For more information on vertical logic, see the section, “The IOM Results Matrix” in chapter 3 of the IOM Monitoring and Evaluation Guidelines .
Key differences between monitoring and evaluation

Monitoring:

  • Monitoring is the continuous, systematic collection of data/information throughout the implementation of an intervention, as part of intervention management. It focuses on the implementation of an intervention, comparing what is delivered to what was planned.
  • It is usually conducted by people directly involved in implementing the intervention.
  • It routinely collects data against indicators and compares achieved results with targets.
  • It focuses on tracking the progress of regular or day-to-day activities during implementation.
  • It looks at the production of results at the output and outcome levels.
  • It concentrates on planned intervention elements.

Evaluation:

  • Evaluation is a scheduled, periodic and in-depth assessment at specific points in time (before, during, at the end of or after an intervention). It is a specific process that assesses the success of an intervention against an established set of evaluation criteria.
  • It is usually conducted by people who have not directly participated in the intervention.
  • It assesses the causal contributions of interventions to results and explores unintended results.
  • It assesses whether, why and how well change has occurred and whether the change can be attributed to the intervention.
  • It looks at performance and achievement of results at the output and outcome levels, as well as at the objective level.
  • It assesses planned elements, looks for unplanned change, and searches for causes, challenges, risks, assumptions and sustainability.

Organisation for Economic Co-operation and Development ( OECD )

    n.d. OECD DAC Criteria for Evaluating Development Assistance .

This section focuses on the strategic orientation at IOM 16 and how it relates to M&E.

  • 16 The following information regarding strategic orientation is partially based on IOM, 2016a (Internal link only).

What it states

The Strategic Vision spans 2019–2023 and is the Director General’s articulation of how IOM as an organization needs to develop over a five-year period in order to meet new and emerging responsibilities at the global, regional, country and project levels. The Strategic Vision will guide the Organization into the future and turn IOM’s aspirations into reality.

It has a number of different components, including the following:

  • Strategic goals , outlining what IOM should be in 2023;
  • Strategic priorities , based on a landscape assessment of what the next decade will bring, according to three main pillars of work: resilience, mobility and governance (more detailed in the SRF );
  • Drivers for success, outlining areas of institutional development that will be needed to fully realize the goals of the Organization.

The Strategic Vision is operationalized through the SRF, which defines four overarching global objectives for the Organization, accompanied by a limited number of long-term and short-term outcomes and outputs that articulate how these highest-level objectives will be reached. These high-level results and the key performance indicators that help measure them can and should be used within projects and programmes to ensure alignment with the Strategic Vision and other key global frameworks like the SDGs and the Global Compact for Migration.

  • Internally, the Strategic Vision strengthens corporate identity at a critical moment, offering a common narrative about what is important about IOM’s work, the issues in which the Organization expects to engage further, and how it wishes to strengthen itself as an organization. All staff, and particularly chiefs of mission, play a crucial role in understanding and embodying the vision at the country level.
  • Externally, this document offers staff a framework for engaging in strategic discussion with Member States and other stakeholders and aims to bring coherence to IOM’s external brand.

Here are some ways to use the Strategic Vision and the related Strategic Results Framework:

     a) Be familiar with the Strategic Vision and the institutional results framework.

     b) Where possible, projects should be aligned to the SRF at the outcome or output levels.

     c) Regional and country offices should align any future country or regional strategies with the Strategic Vision and the SRF, although they still have flexibility to adjust for local needs.

MiGOF 17 was endorsed by IOM Member States at the IOM Council in 2015. MiGOF is now the overarching framework for all of the Organization’s work. MiGOF is linked to the SDGs and represents an ideal for migration governance to which States can aspire.

MiGOF Principles and Objectives

The Principles propose the necessary conditions for migration to be well managed, creating an environment in which migration can benefit all. They represent the means through which a State can ensure that the systemic requirements for good migration governance are in place.

The Objectives are specific and do not require any further conventions, laws or practices beyond those that already exist. Taken together, these objectives ensure that migration is governed in an integrated and holistic way, responding to the need to consider mobile categories of people and to address their needs for assistance in the event of an emergency, building the resilience of individuals and communities, and ensuring opportunities for the economic and social health of the State.

Source : IOM, 2016b .

Under MiGOF, a migration system promotes human mobility and benefits migrants and society when it:

  • Adheres to international standards and fulfils migrants’ rights;
  • Formulates policy using evidence and a “whole-of-government” approach;
  • Engages with partners to address migration and related issues.

The system also seeks to:

  • Advance the socioeconomic well-being of migrants and society;
  • Effectively address the mobility dimensions of crises;
  • Ensure that migration takes place in a safe, orderly and dignified manner.
  • 17 For more information, see IOM, 2016b .

The SDGs 18 were adopted by the United Nations General Assembly in September 2015. With the SDGs, migration has, for the first time, been inserted into mainstream development policy. The central reference to migration in the 2030 Agenda is Target 10.7 under the goal “Reduce inequality within and among countries”. It is a call to “facilitate orderly, safe, regular and responsible migration and mobility of people, including through the implementation of planned and well-managed migration policies”. However, migration and migrants are directly relevant to the implementation of all the SDGs and many of their targets. The SDGs, and the commitment to leave no one behind and to reach the furthest behind, will not be achieved without due consideration of migration. IOM’s Migration and the 2030 Agenda: A Guide for Practitioners outlines these interlinkages in detail.

Migration and the 2030 Agenda

  • Establishing IOM’s Institutional Strategy on Migration and Sustainable Development, which is guiding IOM in the necessary steps to ensure that migration governance can contribute to achieving the 2030 Agenda;
  • Supporting United Nations Country Teams ( UNCTs ) and Member States to integrate migration considerations into Common Country Analyses (CCAs) and United Nations Sustainable Development Cooperation Frameworks ( UNSDCFs );
  • Supporting Member States to measure and report on migration governance within Voluntary National Reviews for the High-Level Political Forum dedicated to reviewing progress on the 2030 Agenda;
  • Implementing joint programming with other UN agencies and actors to ensure development actions are coherent with and complementary to efforts to ensure good migration governance;
  • Providing development actors and donors with the tools and support to integrate migration into development cooperation efforts for enhanced aid effectiveness;
  • Supporting Member States to mainstream migration into policy planning and programming across sectors and general development planning for enhanced development impact;
  • Furthering global dialogue and exchange on migration and sustainable development by supporting fora and platforms such as the Global Forum on Migration and Development;
  • Developing tools to analyse gaps in migration governance, such as the Migration Governance Indicators;
  • Developing tools and providing technical assistance within the context of the UN Network on Migration to help governments and UNCTs leverage the implementation of the Global Compact for Migration for sustainable development outcomes.
  • 18 IOM, 2018d .

As part of IOM’s effort to track progress on the migration aspects of the SDGs, IOM and the Economist Intelligence Unit published a Migration Governance Index in 2016. Based on MiGOF categories, the Index, the first of its kind, provides a framework for countries to measure their progress towards better migration governance at the policy level.

What do the Sustainable Development Goals mean for IOM ’s work and monitoring and evaluation?

Within IOM’s institutional strategy on migration and sustainable development, IOM has committed to three main outcomes: (a) human mobility is increasingly a choice; (b) migrants and their families are empowered; and (c) migration is increasingly well governed. To achieve these outcomes, IOM has committed to four institutional outputs: (a) improved policy capacity on migration and sustainable development through a more robust evidence base and enhanced knowledge management; (b) stronger partnerships across the United Nations development system and beyond that harness the different expertise and capabilities of relevant actors on migration and sustainable development; (c) increased capacity to integrate migration in the planning, implementation, monitoring and reporting of the 2030 Agenda; and (d) high-quality migration programming that contributes to positive development outcomes.

In relation to output (a), a stronger evidence base on migration and sustainable development is crucial if the development potential of migration is to be realized. Enhancing IOM’s capacity to apply quality M&E in its programming from a development perspective will be crucial in this regard. It will also help enhance IOM’s capacity to showcase how its work supports the achievement of the 2030 Agenda through high-quality programming that contributes to development outcomes, as outlined in output (d). IOM also has a responsibility to support its Member States in achieving the same and to ensure that monitoring, evaluation and reporting on migration governance efforts are aligned with and contribute to their efforts to achieve the 2030 Agenda. Thus, output (b) on building stronger partnerships across the United Nations development system and beyond will be crucial to ensure that migration is firmly featured in UNSDCFs and other development agendas, as well as in national and local policies and programming. IOM’s role as coordinator of the United Nations Network on Migration will allow the Organization to achieve this within UNCTs. IOM has developed an action plan to achieve all of this, driven by IOM’s Migration and Sustainable Development Unit and overseen by IOM’s organization-wide Working Group on the SDGs.

UN Sustainable Development Cooperation Framework

The UNSDCF 19 (formerly the United Nations Development Assistance Framework or UNDAF ) is now “the most important instrument for planning and implementation of the United Nations development activities at country level in support of the implementation of the 2030 Agenda for Sustainable Development”. 20

It is a strategic medium-term results framework that represents the commitment of the UNCT of a particular country to supporting that country’s longer-term achievement of the SDGs. Furthermore, it is intended as an instrument that drives strategic planning, funding, implementation, monitoring, learning, reporting and evaluation for the United Nations, in partnership with host governments and other entities.

The UNSDCF explicitly seeks to ensure that government expectations of the United Nations development system will drive its contributions at the country level and that these contributions emerge from an analysis of the national landscape vis-à-vis SDG priorities. It is therefore “the central framework for joint monitoring, review, reporting and evaluation of the United Nations development system’s impact in a country achieving the 2030 Agenda [for Sustainable Development]”. 21

For more information regarding the UNSDCF, see The Cooperation Framework .

Key recommendations to include migration in the United Nations Sustainable Development Cooperation Framework

  • Establish working relations with the resident coordinator and ensure they are up to date on IOM work.
  • IOM should engage fully with the new generation of UNCTs to ensure that migration issues, including displacement and other effects of crisis, are reflected in CCAs, cooperation frameworks and broader UNCT priorities.
  • IOM should participate in – and where possible lead – any country-level inter-agency coordination forums around the UNSDCF to facilitate the inclusion of the perspectives of migrants and migration-affected communities in all development processes.
  • Introduce IOM strategies and work in countries with cooperation frameworks, aligning outcomes, outputs and indicators. This will also facilitate country-level reporting in UN Info.
  • 19 UNSDG, 2019 .
  • 21 Ibid., p.8.

MCOF

The Migration Crisis Operational Framework 22 ( MCOF ) was approved by the IOM Council in 2012 and combines humanitarian activities and migration management services. Some of the key features of MCOF are as follows:

  • It is based on international humanitarian and human rights law and humanitarian principles.
  • It combines 15 sectors of assistance related to humanitarian activities and migration management services.
  • It covers pre-crisis preparedness, emergency response and post-crisis recovery.
  • It complements existing international systems (such as cluster approach) and builds on IOM’s partnerships.

MCOF helps crisis-affected populations, including displaced persons and international migrants stranded in crisis situations in their destination/transit countries, to better access their fundamental rights to protection and assistance.

What does Migration Crisis Operational Framework mean for IOM's work and monitoring and evaluation?

MCOF should be adapted to each context; it can be used for analysing migration patterns in a country and, together with MiGOF , for developing a country’s strategic direction. Projects and programmes should be aligned to MCOF, and progress against an MCOF strategy should be monitored through specific and measurable results.

  • 22 IOM, 2012.

  What it states

The Global Compact for Migration is the first intergovernmentally negotiated agreement, prepared under the auspices of the United Nations, covering all dimensions of international migration in a holistic and comprehensive manner. It is a non-binding document that respects States’ sovereign right to determine who enters and stays in their territory and demonstrates commitment to international cooperation on migration. It presents a significant opportunity to improve the governance of migration, to address the challenges associated with today’s migration and to strengthen the contribution of migrants and migration to sustainable development. The Global Compact for Migration is framed in a way consistent with Target 10.7 of the 2030 Agenda, in which Member States commit to cooperate internationally to facilitate safe, orderly and regular migration. The Global Compact for Migration is designed to:

  • Support international cooperation on the governance of international migration;
  • Provide a comprehensive menu of options for States from which they can select policy options to address some of the most pressing issues around international migration;
  • Give States the space and flexibility to pursue implementation based on their own migration realities and capacities.

The Global Compact for Migration contains 23 objectives for improving migration management at all levels of government. The 23 objectives can be found in paragraph 16 of the United Nations General Assembly Resolution adopting the Global Compact for Safe, Orderly and Regular Migration. 23

  • 23 United Nations, 2018b .

IOM resources

    2012 Resolution No. 1243 on Migration Crisis Operational Framework . Adopted on 27 November.

    2016a IOM Chiefs of Mission Handbook 2016 . Geneva (Internal link only).

    2016b Migration Governance Framework . Brochure. Geneva.

    2018d Migration and the 2030 Agenda: A Guide for Practitioners . Geneva.

    2020b Strategic Vision: Setting a Course for IOM . Geneva.

United Nations

    2018a United Nations General Assembly Resolution 72/279 on Repositioning of the United Nations development system in the context of the quadrennial comprehensive policy review of operational activities for development of the United Nations System . Adopted on 31 May (A/RES/72/279).

    2018b United Nations General Assembly Resolution 73/195 on the Global Compact for Safe, Orderly and Regular Migration . Adopted on 19 December (A/RES/73/195).

    n.d. United Nations Sustainable Development Goals .

United Nations Sustainable Development Group ( UNSDG )

    2019 United Nations Sustainable Development Cooperation Framework – Internal Guidance .

The Peak Performance Center

Critical Thinking

Critical thinking refers to the process of actively analyzing, assessing, synthesizing, evaluating and reflecting on information gathered from observation, experience, or communication. It is thinking in a clear, logical, reasoned, and reflective manner to solve problems or make decisions. Basically, critical thinking is taking a hard look at something to understand what it really means.

Critical Thinkers

Critical thinkers do not simply accept all ideas, theories, and conclusions as facts. They have a mindset of questioning ideas and conclusions. They make reasoned judgments that are logical and well thought out by assessing the evidence that supports a specific theory or conclusion.

When presented with a new piece of information, critical thinkers may ask questions such as:

“What information supports that?”

“How was this information obtained?”

“Who obtained the information?”

“How do we know the information is valid?”

“Why is it that way?”

“What makes it do that?”

“How do we know that?”

“Are there other possibilities?”

Critical Thinking

Combination of Analytical and Creative Thinking

Many people perceive critical thinking just as analytical thinking. However, critical thinking incorporates both analytical thinking and creative thinking. Critical thinking does involve breaking down information into parts and analyzing the parts in a logical, step-by-step manner. However, it also involves challenging consensus to formulate new creative ideas and generate innovative solutions. It is critical thinking that helps to evaluate and improve your creative ideas.

Critical Thinking Skills

Elements of Critical Thinking

Critical thinking involves:

  • Gathering relevant information
  • Evaluating information
  • Asking questions
  • Assessing bias or unsubstantiated assumptions
  • Making inferences from the information and filling in gaps
  • Using abstract ideas to interpret information
  • Formulating ideas
  • Weighing opinions
  • Reaching well-reasoned conclusions
  • Considering alternative possibilities
  • Testing conclusions
  • Verifying whether the evidence/argument supports the conclusions

Developing Critical Thinking Skills

Critical thinking is considered a higher-order thinking skill, involving analysis, synthesis, deduction, inference, reasoning, and evaluation. In order to demonstrate critical thinking, you need to develop skills in:

Interpreting : understanding the significance or meaning of information

Analyzing : breaking information down into its parts

Connecting : making connections between related items or pieces of information.

Integrating : connecting and combining information to better understand the relationship between the information.

Evaluating : judging the value, credibility, or strength of something

Reasoning : creating an argument through logical steps

Deducing : forming a logical opinion about something based on the information or evidence that is available

Inferring : figuring something out through reasoning based on assumptions and ideas

Generating : producing new information, ideas, products, or ways of viewing things.



tools4dev Practical tools for international development


10 Reasons Why Monitoring and Evaluation is Important

Monitoring and evaluation are essential to any project or program. Through this process, organizations collect and analyze data, and determine if a project/program has fulfilled its goals. Monitoring begins right away and extends through the duration of the project. Evaluation comes after and assesses how well the program performed. Every organization should have an M&E system in place. Here are ten reasons why:

M&E results in better transparency and accountability

Because organizations track, analyze, and report on a project during the monitoring phase, there’s more transparency. Information is freely circulated and available to stakeholders, which gives them more input on the project. A good monitoring system ensures no one is left in the dark. This transparency leads to better accountability. With information so available, organizations need to keep everything above board. It’s also much harder to deceive stakeholders.

M&E helps organizations catch problems early

Projects never go perfectly according to plan, but a well-designed M&E system helps the project stay on track and perform well. M&E plans help define a project’s scope, establish interventions when things go wrong, and give everyone an idea of how those interventions affect the rest of the project. This way, when problems inevitably arise, a quick and effective solution can be implemented.

M&E helps ensure resources are used efficiently

Every project needs resources. How much cash is on hand determines things like how many people work on a project, the project’s scope, and what solutions are available if things get off course. The information collected through monitoring reveals gaps or issues, which require resources to address. Without M&E, it wouldn’t be clear what areas need to be a priority. Resources could easily be wasted in one area that isn’t the source of the issue. Monitoring and evaluation helps prevent that waste.

M&E helps organizations learn from their mistakes

Mistakes and failures are part of every organization. M&E provides a detailed blueprint of everything that went right and everything that went wrong during a project. Thorough M&E documents and templates allow organizations to pinpoint specific failures, as opposed to just guessing what caused problems. Often, organizations can learn more from their mistakes than from their successes.

M&E improves decision-making

Data should drive decisions. M&E processes provide the essential information needed to see the big picture. After a project wraps up, an organization with good M&E can identify mistakes, successes, and things that can be adapted and replicated for future projects. Decision-making is then influenced by what was learned through past monitoring and evaluation.

M&E helps organizations stay organized

Developing a good M&E plan requires a lot of organization. That process in itself is very helpful to an organization. It has to develop methods to collect, distribute, and analyze information. Developing M&E plans also requires organizations to decide on desired outcomes, how to measure success, and how to adapt as the project goes on, so those outcomes become a reality. Good organizational skills benefit every area of an organization.

M&E helps organizations replicate the best projects/programs

Organizations don’t like to waste time on projects or programs that go nowhere or fail to meet certain standards. The benefits of M&E that we’ve described above – such as catching problems early, good resource management, and informed decisions – all result in information that ensures organizations replicate what’s working and let go of what’s not.

M&E encourages innovation

Monitoring and evaluation can help fuel innovative thinking and methods for data collection. While some fields require specific methods, others are open to more unique ideas. As an example, fields that have traditionally relied on standardized tools like questionnaires, focus groups, interviews, and so on can branch out to video and photo documentation, storytelling, and even fine arts. Innovative tools provide new perspectives on data and new ways to measure success.

M&E encourages diversity of thought and opinions

With monitoring and evaluation, the more information the better. Every team member offers an important perspective on how a project or program is doing. Encouraging diversity of thought and exploring new ways of obtaining feedback enhance the benefits of M&E. Tools like surveys are only truly useful if they include a wide range of people and responses. In good monitoring and evaluation plans, all voices are important.

Every organization benefits from M&E

While certain organizations can use more unique M&E tools, all organizations need some kind of monitoring and evaluation system. Whether it’s a small business, corporation, or government agency, all organizations need a way to monitor their projects and determine if they’re successful. Without strong M&E, organizations aren’t sustainable, they’re more vulnerable to failure, and they can lose the trust of stakeholders.



Triangulation

Triangulation facilitates validation of data through cross verification from more than two sources.

It tests the consistency of findings obtained through different instruments and increases the chance to control, or at least assess, some of the threats or multiple causes influencing our results.

Triangulation is not just about validation but about deepening and widening one’s understanding. It can be used to produce innovation in conceptual framing. It can lead to multi-perspective meta-interpretations. “[Triangulation is an] attempt to map out, or explain more fully, the richness and complexity of human behavior by studying it from more than one standpoint.” – Cohen and Manion

Denzin (1973, p.301) proposes four basic types of triangulation:

  • Data triangulation:  involves time, space, and persons
  • Investigator triangulation:  involves multiple researchers in an investigation
  • Theory triangulation:  involves using more than one theoretical scheme in the interpretation of the phenomenon
  • Methodological triangulation:  involves using more than one option to gather data, such as interviews, observations, questionnaires, and documents.

Reasons for triangulation

Carvalho and White (1997) propose four reasons for undertaking triangulation (a toy sketch follows the list):

  • Enriching:  The outputs of different informal and formal instruments add value to each other by explaining  different aspects of an issue
  • Refuting:  Where one set of options disproves a hypothesis generated by another set of options.
  • Confirming:  Where one set of options confirms a hypothesis generated by another set of options
  • Explaining:  Where one set of options sheds light on unexpected findings derived from another set of options.
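Purely as a toy illustration of the “confirming”/“refuting” logic, the Python sketch below compares invented findings from three hypothetical sources pairwise. Real triangulation is an analytical judgement, not a string comparison, so treat this only as a way of organizing the bookkeeping.

```python
from itertools import combinations

# Invented findings from three hypothetical data sources on one question.
findings = {
    "household survey": "attendance increased",
    "key informant interviews": "attendance increased",
    "project records": "attendance unchanged",
}

# Crude pairwise check: identical statements count as "confirming";
# anything else needs follow-up (refuting or explaining, in the
# Carvalho and White sense).
for (src_a, find_a), (src_b, find_b) in combinations(findings.items(), 2):
    verdict = "confirming" if find_a == find_b else "needs follow-up"
    print(f"{src_a} vs {src_b}: {verdict}")
```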

Triangulation to minimize bias

The problem with relying on just one option has to do with bias. There are several types of bias encountered in research, and triangulation can help with most of them.

  • Measurement bias – Measurement bias is caused by the way in which you collect data. Triangulation allows you to combine individual and group research options to help reduce bias such as peer pressure on focus group participants.
  • Sampling bias – Sampling bias is when you don’t cover all of the population you’re studying (omission bias) or you cover only some parts because it’s more convenient (inclusion bias). Triangulation combines the different strengths of these options to ensure you get sufficient coverage.
  • Procedural bias – Procedural bias occurs when participants are put under some kind of pressure to provide information. For example, doing “vox pop” style interrupt polls might catch the participants unaware and thus affect their answers. Triangulation allows us to combine short engagements with longer engagements where participants have more time to give considered responses.

Using an evaluation matrix to check triangulation

An evaluation matrix, as shown below, will help you check that the planned data collection will cover all the KEQs (key evaluation questions), see if there is sufficient triangulation between different data sources, and help you design questionnaires, interview schedules, data extraction tools for project records, and observation tools, to ensure they gather the necessary data.

The example matrix crosses each key evaluation question (KEQ) against four data sources – participant questionnaire, key informant interviews, project records and observation of program implementation – with the cells marking which sources are planned to inform each KEQ:

  • KEQ1 What was the quality of implementation?
  • KEQ2 To what extent were the program objectives met?
  • KEQ3 What other impacts did the program have?
  • KEQ4 How could the program be improved?
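One simple, programmatic way to use such a matrix is to check that every KEQ is informed by at least two data sources, i.e. that triangulation is possible. The Python sketch below does this with an assumed KEQ-to-source mapping (the mapping itself is illustrative, not taken from any particular evaluation):

```python
# Illustrative KEQ-to-data-source mapping (assumed for this example).
matrix = {
    "KEQ1 quality of implementation": {"questionnaire", "interviews", "observation"},
    "KEQ2 objectives met":            {"questionnaire", "project records"},
    "KEQ3 other impacts":             {"interviews"},
    "KEQ4 improvements":              {"questionnaire", "interviews"},
}

# Triangulation check: every KEQ should draw on at least two sources.
for keq, sources in matrix.items():
    if len(sources) < 2:
        print(f"{keq}: only {len(sources)} source(s); add another for triangulation")
    else:
        print(f"{keq}: {len(sources)} sources; ok")
```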

Carvalho, S. and White, H. (1997).  Combining the quantitative and qualitative approaches to poverty measurement and analysis: The practice and the potential . World Bank Technical Paper 366. Washington, D.C.: World Bank 

Cohen, L. & Manion, L. Research methods in education .  Routledge.

Denzin, Norman K. (1973).  The research act: A theoretical  introduction to sociological methods.  New Jersey: Transaction Publishers.

Kennedy, Patrick. (2009).  How to combine multiple research options: Practical Triangulation.  http://johnnyholland.org/2009/08/20/practical-triangulation  (archived link)


Linking planning with monitoring & evaluation – closing the loop

Planning, monitoring and evaluation are at the heart of a learning-based approach to management. Achieving collaborative, business/environmental or personal goals requires effective planning and follow-through. The plan is effectively a “route-map” from the present to the future. To plan a suitable route you must know where you are (situation analysis) and where you want to go (establish goals and identify outcomes). Only then can appropriate action plans be developed to help achieve the desired future.

However, because the future is uncertain, our action plans must be adaptive and allow continually for “learning by doing”. To do this we need appropriate monitoring and evaluation (m&e) tools and processes, and information flows that help the different stakeholders involved check that their efforts are proceeding as planned, and to refine and guide their responses if changes are needed.

Both sets of plans are best developed in conjunction with the people who will carry them out, as they are then more likely to actually do so. Two sets of monitoring plans are needed: results monitoring focuses on whether you are getting where you want to go, while process monitoring focuses on how efficiently you are getting there.

Worldwide there is a trend towards an increased use of indicators to monitor development and track progress. Indicators quantify and simplify phenomena, and help us understand and make sense of complex realities. Indicators may be either qualitative or quantitative, and a combination of the two is often best. An evaluation is like a good story: it needs some (qualitative) anecdotal evidence to put the lesson in context, and some (quantitative) facts and figures that reinforce the message.

Theories of change and logic models

Often people talk about logic models and theory of change processes interchangeably. Logic models connect programmatic activities to client or stakeholder outcomes. But a theory of change goes further, specifying how to create a range of conditions that help programmes deliver on the desired outcomes. These can include setting out the right kinds of partnerships, types of forums, particular kinds of technical assistance, and tools and processes that help people operate more collaboratively and be more results focused.

An outcomes or logic model approach to project planning describes the logical linkages among programme resources, activities, outputs, and audiences, and highlights different orders of outcomes related to a specific problem or situation. Importantly, once a programme has been described in terms of the logic model, critical measures of performance can be identified. In this way logic models can be seen to support both planning and evaluation, and different evaluation types/approaches can be used to measure different parts of the overall project or change initiative.
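To make the idea of logical linkages concrete, here is a minimal Python sketch of a logic model as an ordered chain from resources to impact. The programme content is invented, and a real model would also record assumptions and external factors.

```python
from collections import OrderedDict

# Invented example: a logic model as an ordered chain of levels.
logic_model = OrderedDict([
    ("resources",  ["trainers", "curriculum", "budget"]),
    ("activities", ["run 10 farmer workshops"]),
    ("outputs",    ["200 farmers trained"]),
    ("outcomes",   ["improved practices adopted on farms"]),
    ("impact",     ["higher household incomes"]),
])

# Read top-down for the planning view ("if we do X, we expect Y");
# read bottom-up for the evaluation view ("what had to happen for Y?").
for level, items in logic_model.items():
    print(f"{level:>10}: {', '.join(items)}")
```

Reading the same chain in both directions is what links planning to evaluation: the planner fills it in from the top, while the evaluator interrogates it from the bottom.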

Within the broader monitoring and evaluation context, there are a number of framings that will influence the final approaches chosen. Different types of evaluations answer different questions, and related frameworks can provide ideas as to the scale and levels of programme intensity to be considered by stakeholders. Within these overall framings, more examples of different evaluation techniques can be found on the different approaches page.

Monitoring, evaluation and learning (MEL)

A common focus of monitoring and evaluation (M&E) is to report on how you are tracking against outputs and short or long-term outcomes. While reporting on outputs and outcomes is an important purpose for projects, learning from these activities is equally important. A monitoring, evaluation and learning (MEL) approach encourages and fosters a focus on monitoring performance, selectively evaluating activities, and supporting continuous learning .

MEL assists organizations to clarify intentions, to collect essential data for measuring efficiency and impact, and to identify and monitor levers for change. Ideally, MEL processes also include a realistic evaluation of capability and capacity (internally and externally across the decision-making setting) to respond and adapt with agility.

Other related site pages include links to guides on managing participation and improving facilitation. By fostering good participation in planning, monitoring and evaluation we can support empowerment, motivation and strengthened relationships . This can help in supporting innovative project and development approaches through social learning , the improved use of indicators to monitor development and track progress, and adaptive management .



Monitoring and Evaluation Officer Job Description

Who is a Monitoring and Evaluation Officer?


As a Monitoring and Evaluation Officer, you will develop and implement monitoring and evaluation frameworks, collect and analyze data, and generate reports to inform decision-making and improve program outcomes. Your role involves working closely with project teams, stakeholders, and partners to ensure accountability and learning.

Responsibilities:

  • Develop monitoring and evaluation frameworks, plans, and indicators for projects or programs.
  • Design data collection tools, surveys, and methodologies to gather qualitative and quantitative data.
  • Implement data collection activities, including surveys, interviews, focus group discussions, and observations.
  • Analyze data using statistical and qualitative analysis techniques to assess program performance and outcomes.
  • Prepare and present reports, dashboards, and visualizations to communicate findings and recommendations to stakeholders.
  • Monitor project activities, outputs, and outcomes against established targets and benchmarks.
  • Conduct field visits and assessments to verify data quality, monitor progress, and identify challenges.
  • Support project teams in setting up data management systems and databases to store and analyze data.
  • Provide training and capacity building to project staff and partners on monitoring and evaluation concepts and tools.
  • Facilitate learning and knowledge sharing sessions to capture best practices and lessons learned from program implementation.
  • Collaborate with stakeholders, including donors, government agencies, and community partners, to ensure alignment of monitoring and evaluation efforts.
  • Participate in program planning, review, and evaluation meetings to contribute insights and recommendations.
  • Ensure compliance with ethical standards, data privacy regulations, and organizational policies in monitoring and evaluation activities.
  • Contribute to the development and refinement of monitoring and evaluation methodologies and tools.
  • Support organizational learning and continuous improvement efforts through feedback mechanisms and reflective practices.

Requirements and Qualifications:

  • Bachelor's degree in social sciences, international development, statistics, or related field; master's degree preferred.
  • Proven experience in monitoring and evaluation roles, preferably in the development or nonprofit sector.
  • Strong understanding of monitoring and evaluation concepts, frameworks, and methodologies.
  • Proficiency in quantitative and qualitative data analysis techniques and tools.
  • Excellent analytical, critical thinking, and problem-solving skills.
  • Ability to communicate complex concepts and findings effectively to diverse audiences.
  • Experience with data management software, statistical analysis software, and data visualization tools.
  • Knowledge of project management principles and practices.
  • Strong interpersonal skills and ability to work collaboratively in multicultural settings.
  • Commitment to transparency, accountability, and learning.

Required Skills:

  • Data analysis
  • Monitoring and evaluation
  • Research skills
  • Communication skills
  • Project management
  • Problem-solving abilities
  • Data management
  • Critical thinking
  • Adaptability

Frequently Asked Questions

What does a monitoring and evaluation officer do?

A Monitoring and Evaluation Officer designs, implements, and manages monitoring and evaluation systems to assess the effectiveness and impact of programs, projects, or interventions, collecting and analyzing data to inform decision-making and improve program outcomes.

What qualifications are required to become a monitoring and evaluation officer?

Typically, a bachelor's degree in social sciences, international development, statistics, or a related field is required, along with proven experience in monitoring and evaluation roles. Strong analytical, communication, and project management skills are essential for success in this role.


American Psychological Association

Teaching critical thinking in digital spaces

Vol. 55 No. 6 Print version: page 10

  • Misinformation and Disinformation
  • Social Media and Internet


  • There’s a movement afoot to equip K–12 students with the skills they need to identify misinformation on social media.
  • Psychologists are a key part of the effort to help youth build digital literacy skills and create science-backed digital literacy tools for educators.
  • Efforts to improve digital literacy among youth will help protect the next generation from the spread of false information online and guide youth on how to use social media safely.

At least 21 state legislatures have taken steps to reform K–12 media and information literacy education, with California, Delaware, Illinois, and New Jersey passing comprehensive reforms ( U.S. Media Literacy Policy Report, Media Literacy Now , 2024 ). The largely bipartisan efforts are a response to challenges that most school curriculums do not yet address—skills like sorting out what is true or false online, identifying when content is produced by artificial intelligence (AI), and using social media safely.

“We’ve all seen how the spread of online misinformation and disinformation is growing and that it has real-world consequences,” said Assemblymember Marc Berman, JD, an attorney who represents California’s 23rd District and spearheaded the state’s digital literacy education law. “I can’t force adults to go back to school and take media literacy, but at a minimum, we can make sure that our young people are getting the skills they need for today’s world.”

People of all ages are susceptible to misinformation, but youth—who spend an average of 4 to 6 hours per day online— say they need help . In one survey of young adults in Canada, 84% were unsure they could distinguish fact from fiction on social media ( Youth Science Survey , Canada Foundation for Innovation, 2021 ). In a study led by educational psychologist Sam Wineburg, PhD, 82% of middle school students could not tell the difference between an online news story and an advertisement ( Evaluating Information: The Cornerstone of Civic Online Reasoning , Stanford Digital Repository, 2016 ).

“It’s those kinds of findings that have gotten the attention of legislators,” said Wineburg, who is an emeritus professor at Stanford University and cofounder of the Digital Inquiry Group (DIG), a nonprofit that creates free research-backed digital literacy tools for educators.

“Increasingly, as young people’s apps of choice are TikTok and YouTube, the adults have woken up to the fact that quality information is to civic understanding what clean air and water are to civic health,” Wineburg said.

The most comprehensive programs, which are now being developed and tested for K–12 audiences, also aim to teach students how to locate and assess the source of online information and to think critically about how generative AI produces content. They also teach students about digital citizenship, which involves engaging respectfully with others online.

Psychologists are a key part of those efforts. In its 2023 Health Advisory on Social Media Use in Adolescence , APA recommended psychologically informed media literacy training for youth, guidance echoed by U.S. Surgeon General Vivek H. Murthy. What is needed now is ongoing research on what works, as well as strong collaboration with journalists, educators, and policymakers to swiftly put research insights into practice.

This year, APA also released an updated scientific roundup focused on the risks of social media content, features, and functions . The report also provides concrete recommendations for minimizing psychological harm, including tips for monitoring use.

“To me, this is really one of the most important things we can be doing right now as psychologists, given how misinformation has made science political in ways that are really frightening,” said Susan Nolan, PhD, a professor of psychology at Seton Hall University in New Jersey who studies and advocates for scientific literacy.

Media literacy reform

While social media platforms typically require users to be 13 or older, most adolescents create accounts before then, at a time when their brains are particularly vulnerable to social influence ( The Common Sense Census: Plugged-In Parents of Tweens and Teens, Common Sense Media , 2016 ). In addition to the interpersonal risks of getting online, surveys show that adolescents are more likely to believe conspiracy theories than adults—particularly those adolescents who spend a lot of time on social media (“ Belief in Conspiracy Theories Higher Among Teenagers Than Adults, as Majority of Americans Support Social Media Reform, New Polling Finds ,” Center for Countering Digital Hate, Aug. 16, 2023).

“Media literacy is literacy in the 21st century, and we don’t start teaching literacy in high school,” said Erin McNeill, founder and CEO of Media Literacy Now , an organization dedicated to K–12 media literacy reform. “It’s an essential life skill that has to be built on a foundation, not rolled out at the last minute.”

Psychological research has played an important role in demonstrating the need for starting media literacy training early and in passing corresponding educational reforms at the state level. In a 2021 study by Wineburg and his colleagues, 3,446 census-matched high school students were tasked with investigating a website, CO2 Science , and evaluating whether it provided reliable information about human-induced climate change. Only 4% of students discovered that the site’s chief sponsor was ExxonMobil ( Educational Researcher, Vol. 50, No. 8, 2021 ).

More than half of the students in the study also believed that a Facebook video that appeared to show ballot stuffing, shot in Russia and posted anonymously, was “strong evidence” of U.S. voter fraud.

“We leaned on these studies when justifying the legislation because they show how the internet and social media make it a lot easier to select only the information that supports our preexisting beliefs, rather than providing a more balanced view,” said Berman, who also pointed to APA’s 2023 Health Advisory on Social Media Use in Adolescence to support the need for policy reform.

Drawing on psychological research, APA’s latest guidance recommends a series of digital literacy competencies that can provide a starting point for policymakers. Those include understanding the tactics used to spread mis- and disinformation, limiting overgeneralizations that lead people to incorrectly interpret others’ beliefs, and helping young people learn to nourish healthy online relationships.

“Developmentally, adolescents are especially vulnerable to the features of social media that are designed to keep users online, such as likes, push notifications, autoplay, and algorithms that deliver extreme content,” said Sophia Choukas-Bradley, PhD, an associate professor of psychology at the University of Pittsburgh who contributed to both APA reports. “As psychologists, we need to provide teens with digital literacy and skills to combat these design features while simultaneously pushing for policies that require tech companies to change the platforms themselves.”

With legislation now in place, New Jersey’s Department of Education is crafting its detailed information literacy standards, drawing on APA’s Resolution on Combating Misinformation and Promoting Psychological Science Literacy (PDF, 53KB)  in the process. The curriculum will include training on such topics as the scientific method, the difference between primary and secondary sources, how to differentiate fact from opinion, and the ethical production of information (including data ethics).

“When you look at what is in the curriculum, really all of it ultimately ties to psychology,” Nolan said about the New Jersey law.

Progress at the state level is meaningful, but mandates do not necessarily equal action. It can take years for state educational boards to develop and implement curriculum reforms, especially if research has not clearly shown what works.

“It’s one thing to pass a law, but it’s quite another to develop and fund evidence-based professional development programs for teachers, many of whom do not feel up to this task” without further training, Wineburg said.


Equipping and empowering youth

Policymakers, educators, librarians, and even journalists are putting their heads together to decide what and how to teach media literacy to kids and teens. But those on the front lines also stress the importance of sound science that can guide the development of interventions from the get-go.

“What happens often in K–12 education is we get separated from the research,” said Kathryn Procope, EdD, executive director at Howard University Middle School of Mathematics and Science in Washington, D.C. “Getting connected with what the research says can help educators sit down collectively and decide what we’re going to do” when new challenges arise.

DIG offers one solution: its Civic Online Reasoning program, a free curriculum that teaches lateral reading—a fact-checking method where readers evaluate source credibility, such as by searching for background in a separate browser tab. The program also teaches skills such as click restraint, the strategy of looking past the first results suggested by search engines to results from more credible sources.

“Behind lateral reading is the idea that we need to think about online information in a fundamentally different way,” Wineburg said. “Rather than immediately looking at the claim, we want people asking: Who is the person or the organization behind this claim?”

Studies of lateral reading interventions show that they can change the way young people interact with information online. Students who completed six 50-minute lessons in a field study across six Lincoln, Nebraska, high schools were significantly more accurate in assessing source credibility than their peers who did not get the intervention ( Journal of Educational Psychology , Vol. 114, No. 5, 2022 ). In Canada, 2,278 middle and high school students completed the CTRL-F lateral reading program. Beforehand, only 6% could identify the agenda of an advocacy group, but that number rose to 31% after the intervention and to 49% 6 weeks later ( Brodsky, J. E., et al., AERA Open , Vol. 9, 2023; The Digital Media Literacy Gap , CIVIX Canada, 2021 ).

Research conducted in Germany and Italy also found that lateral reading helped news consumers identify false information online, and that pop-up reminders and monetary incentives can increase the practice of lateral reading and click restraint ( Fendt, M., et al., Computers in Human Behavior , Vol. 146, 2023 ; Panizza, F., et al., Scientific Reports , Vol. 12, 2022 ).

Choukas-Bradley is working with the Center for Digital Thriving at Harvard Graduate School of Education and Common Sense Media to develop and evaluate resources that educate adolescents about the social media features designed to keep them online, as well as to teach cognitive and behavioral techniques that promote healthier social media use.

“We listen closely to students and then trace the connections to key evidence-based practices,” said Emily Weinstein, EdD, cofounder of the Center for Digital Thriving, which offers resources codesigned by educators, students, and clinical psychologists.

For example, teens share common thinking traps that are amplified by tech, such as “everyone on social media is happier than me,” or “my friend must be mad if they haven’t responded to my Snap.” Both are examples of cognitive distortions, for which psychologists have a robust evidence base.

“There’s real power in the idea that ‘if you can name it, you can tame it,’ which is one reason we want every student to know about common thinking traps,” Weinstein said.

Educators and researchers are aware of the irony behind adults teaching digital natives how to use platforms with which they are already intimately familiar. For that reason, some are working with kids and teens to teach digital literacy in ways that are meaningful to them.

“Students are far ahead of educators when it comes to using new technologies, so the more that young people are involved in the design of the curriculum that will be used to teach media literacy in 2024 and beyond, the better,” said Chelsea Waite, a principal investigator at the Center on Reinventing Public Education at Arizona State University’s Mary Lou Fulton Teachers College who studies innovative practices at K–12 schools across the United States.

DIG has partnered with Microsoft to integrate information literacy quests that focus on exploring bias and persuasion—for example, when information is trustworthy enough to be shared with others—into the video game Minecraft. Mizuko Ito, PhD, a cultural anthropologist who has studied youth-centered learning for years and directs the Connected Learning Lab at the University of California, Irvine, coleads the Connected Learning Alliance , which fosters partnerships between researchers, developers, and youth to generate new technologies that prioritize connection and well-being rather than profit. One of the organization’s latest projects, Connected Camps , pairs 8- to 13-year-old gamers with college gamers to learn about digital citizenship and to become part of a prosocial online community.

“We know that it’s so much more effective to do online literacy learning and skills development within the context of something youth actually care about, like the gaming universe,” Ito said.

Other youth media organizations are leveraging content young people care about to equip and empower them to create positive online spaces. The This Teenage Life podcast, for example, is a school-based program that teaches kids to produce a podcast while thinking critically about how to engage with today’s digital ecosystem and be a good citizen online.

“As educators, we have to remember that young people nowadays are going to ask: Why am I learning this? It doesn’t have anything to do with what I care about,” Procope said. “That means that we have to do what we’re doing a lot differently.”


From the ground up

The online world has wrought so much change that many experts say education must fundamentally change, too.

“Right now, the approach is to treat information literacy as a patch to put on the whole of the curriculum,” Wineburg said. “But really the challenge, when students are leading digital lives, is to fundamentally rethink the entire curriculum we have.”

That’s a tall order, but a starting point is to interweave digital and media literacy lessons throughout multiple courses rather than treat the subject as a separate entity. For example, a high school biology lesson about vaccines will be more meaningful to students if it acknowledges and addresses the pseudoscientific information they see daily on TikTok, such as the supposed health benefits of castor oil, Wineburg said. Another idea: Students can learn about the strengths and weaknesses of ChatGPT in a history class by asking questions about a historical event where the facts are unclear, such as who fired the first shot in the Battle of Lexington, the first volley in the Revolutionary War.

“Whether it’s debunking pseudoscience on social media or understanding the nuances of AI in history class, every subject offers an opportunity to cultivate these skills,” said Nicole Barnes, PhD, senior director of APA’s Center for Psychology in Schools and Education (CPSE). “After all, we’re not just preparing students for exams but for life in a digital world. This is exactly what we are doing in the CPSE—providing pre-K–12 educators with teaching and learning resources that are grounded in psychological science.”

Several states are aiming for such integration by giving librarians a central role in administering media literacy training throughout schools. The International Society for Technology in Education (ISTE) also recommends a comprehensive approach to K–12 training on technology and online media.

“The people leading these efforts—from national organizations to state legislators—are starting to see this as something that needs to be integrated throughout the entire curriculum,” McNeill said.

The top priority now is to provide states, districts, and schools with packaged materials that have been vetted by peer-reviewed research, Wineburg said. Educators should be wary of for-profit tools that have not been proven effective based on field studies in real classrooms. Still, McNeill said the current wave of digital literacy legislation is progress to be proud of.

“While we still have a lot to learn, we also know that there are risks for youth online,” McNeill said. “We have enough evidence now that there’s plenty of reason to take action.” 


Further reading

What fact-checkers know about media literacy—and students should, too. Terada, Y., Edutopia, May 26, 2022

Teaching lateral reading: Interventions to help people read like fact checkers. McGrew, S., Current Opinion in Psychology, 2024

Building media literacy into school curriculums worldwide. Leedom, M., News Decoder, Feb. 29, 2024

Teaching digital well-being: Evidence-based resources to help youth thrive. Weinstein, E., et al., Center for Digital Thriving, 2023

Fighting fake news in the classroom. Pappas, S., Monitor on Psychology, January/February 2022

How to use ChatGPT as a learning tool. Abramson, A., Monitor on Psychology, June 2023



Advancing Sustainable Cyber-Physical System Development with a Digital Twins and Language Engineering Approach: Smart Greenhouse Applications


1. Introduction

1.1. Brief History of Digital Twin Technology

1.2. Advances in the Agriculture Sector

1.3. Model-Driven Engineering as a Transformative Approach in Smart Agriculture

  • An abstracted software system architecture for a digital twin IoT monitoring system that demonstrates the concepts and components of the whole system.
  • A DSL family, GreenH, for modeling and designing a digital twins-based greenhouse monitoring system. GreenH comprises three DSLs, with each one responsible for capturing a specific view of the domain: GreenH Design DSL for representing the structural view, including IoT smart devices, crop growth, and greenhouse characteristics; GreenH Flow DSL for capturing greenhouse behavior in terms of data flow and control, including data pipelines, data transmission formats, and controlling conditions; and GreenH Twin for representing the structural and behavioral views of the smart greenhouses in the virtual environment, thereby forming the corresponding system.
  • Formal definitions of the abstract and concrete syntax and semantics of the proposed languages, using a metamodeling approach and the extended Backus–Naur form (EBNF) notation, respectively.
  • A model transformation engine to deduce GreenH Twin elements from the GreenH Design and GreenH Flow models.
  • An execution engine/model transformation and code generation strategy to support the automation of creating digital twin simulation systems for monitoring smart greenhouses.
  • The specificity of DSLs enables quicker iterations and modifications, which are essential in dynamic IoT environments, unlike LLMs, which may require extensive tuning or many user prompts.
  • DSLs focus on specific functionalities, use available resources more efficiently, and avoid the generality overhead associated with LLMs and generative approaches.
  • DSLs adhere to specific standards within their domain and promote interoperability among systems, which poses challenges for LLMs across different domains.

2. Related Research

2.1. Digital Twins in Smart Agriculture

2.1.1. Digital Twins in Internet of Things

2.1.2. Digital Twins in Controlled Environment Agriculture

2.2. Languages Engineering Approach in Modeling and Simulation Systems

2.3. Domain-Specific Languages and the Agriculture Sector

3. Methodology

3.1. Domain Analysis

3.1.1. Temperate Fruits Development and Growth

3.1.2. Types of Greenhouses

3.1.3. Sensors: Types, Number and Locations

  • N is the number of sensors. Each sensor is located in a single zone for microclimate controlling and monitoring tasks.
  • A is the measurable floor area of the greenhouse.
  • C is a constant to represent the nominal coverage area for a sensor. A coverage function $c : \mathbb{R} \rightarrow \{0,1\}$ can then be defined, where
    $$c(x) = \begin{cases} 1, & \text{within the sensor range} \\ 0, & \text{out of the sensor range} \end{cases}$$
  • Let H be the height of the greenhouse; for some shapes, we consider the height in the middle.
  • Let K be the shape coefficient constant that adjusts for the complexity of the greenhouse’s shape and internal structures. If S is the set of all possible shapes σ with the same floor area (σ ∈ S), then K can be seen as K = K(σ), where
    $$K(\sigma) = \begin{cases} 1.0, & \text{simple shape} \\ 1.2, & \text{multi-span} \\ 0.9, & \text{curved} \\ 1.1, & \text{Quonset} \\ 1.2, & \text{gable roof} \\ 1.2, & \text{Gothic arch} \end{cases}$$
    (A worked example combining these variables follows this list.)
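The excerpt does not reproduce the formula that combines these variables, so the following worked example rests on an assumed combination: that the sensor count scales the area-to-coverage ratio by the shape coefficient,

$$N \approx \left\lceil K(\sigma) \cdot \frac{A}{C} \right\rceil$$

Under this assumption, a gable-roof greenhouse ($K = 1.2$) with floor area $A = 200\ \mathrm{m}^2$ and nominal per-sensor coverage $C = 25\ \mathrm{m}^2$ would need $N = \lceil 1.2 \times 200 / 25 \rceil = \lceil 9.6 \rceil = 10$ sensors.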

3.1.4. Simulation and Visualization

3.2. Definition of Requirements of Domain-Specific Languages

A snapshot of GreenH Flow to show a humidity monitoring and control process.

4. Design Principles of the GreenH Language Syntax

4.1. Aspect Orientation and Separation of Concerns

4.2. Integration Functionality of GreenH Language

A snapshot of GreenH Design to show an example of specifying a configuration.
A snapshot of a corresponding GreenH Flow to show the predefined context that creates a smart control rule.

4.3. Extendibility Features of GreenH

A snapshot of GreenH Design EBNF definition to extend the language with new features.
A snapshot of GreenH Flow to show the controlling rule corresponds to the extension.

4.4. Testability Features of the Language Design

5. Definition of GreenH Language Syntax

5.1. Formalizing GreenH Concrete Syntax

5.1.1. Formalizing the Concrete Syntax of GreenH Design DSL

  • Greenhouse Design : a concept of the language that expresses the syntax and constructs of the overall structure of a greenhouse design, and that encompasses its shape, dimensions, sensors, actuators, and growth journey details as enclosed in a specific format. This can be formally defined using EBNF notation as:
  • Greenhouse Shape : a concept of the language that specifies the geometric shape of the greenhouse, which is a critical factor in determining the required number of sensors and their positions inside the greenhouse area. The common types of greenhouse shapes, such as square, rectangle, and dome, are captured using ShapeType to provide more flexibility in design options. This can be formally defined using EBNF notation as:
  • Dimensions : a concept of the language that describes the syntax and constructs of the dimensions of the greenhouse, which are fundamental for planning the size and area along with its shape. This can be formally defined using EBNF notation as:
  • Environmental Sensors : a concept of the language for describing the syntax and constructs of special types of sensors, installed in specific locations, and their tasks, which are utilized for monitoring environmental conditions inside the greenhouse, such as humidity, temperature, and soil moisture. Initial readings can be determined to enable monitoring and control of the smart system according to an acceptable range. This can be formally defined using EBNF notation as:
  • Actuator : a concept of the language that expresses the syntax and constructs of actuators that can act on various greenhouse components that are placed in specific locations. It can carry out actions in response to rules for actively managing the environment inside the greenhouse. The actuator can be specialized into different types of actuators for specific greenhouse systems, such as heating and lighting systems using the ActuatorType element. This can be formally defined using EBNF notation as:
  • Growth Manager : a concept of the language for expressing the growth and development stages of a specific type of crop planted inside the greenhouse. As explained in Section 3.1.1 , the temperature required by temperate fruits varies for each stage of growth development. The details of each growth stage, including its purpose, duration, and optimal environmental conditions, are captured in the language using the Stage element. This enables the language to describe the growth journey in detail; it can be defined formally using EBNF notation. (A hypothetical grammar sketch follows this list.)
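The actual EBNF productions are not reproduced in this excerpt, so the following is a minimal, hypothetical sketch of what a GreenH Design grammar could look like, prototyped with the Python lark parsing library. Every rule name, keyword, and the sample program are illustrative assumptions, not the paper's definitions.

```python
# Hypothetical sketch of a GreenH Design grammar using the `lark` parsing
# library (pip install lark). All rules, keywords, and the sample program
# below are illustrative; the paper's actual EBNF is not shown in this excerpt.
from lark import Lark

GREENH_DESIGN_SKETCH = r"""
    design: "greenhouse" NAME "{" shape dimensions sensor* actuator* "}"
    shape: "shape" ":" ("square" | "rectangle" | "dome" | "quonset")
    dimensions: "dimensions" ":" NUMBER "x" NUMBER "x" NUMBER
    sensor: "sensor" NAME "type" ":" NAME "at" ":" NAME ("initial" ":" NUMBER)?
    actuator: "actuator" NAME "type" ":" NAME "at" ":" NAME

    %import common.CNAME -> NAME
    %import common.NUMBER
    %import common.WS
    %ignore WS
"""

parser = Lark(GREENH_DESIGN_SKETCH, start="design")

example = """
greenhouse Berry1 {
  shape: rectangle
  dimensions: 20 x 10 x 4
  sensor t1 type: temperature at: zoneA initial: 21
  actuator fan1 type: cooling at: zoneA
}
"""
print(parser.parse(example).pretty())  # prints the parse tree
```

Prototyping the concrete syntax this way makes it cheap to iterate on the DSL's surface form before committing to a full metamodel.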

5.1.2. Formalizing the Concrete Syntax of GreenH Flow DSL

  • Greenhouse flow : a concept of the flow language used to express syntax and construct the overall data flow behavior of the greenhouse smart system. It is worth mentioning that the greenhouse components that are defined previously using GreenH Design language can be referenced in this model via design_source element. These can be formally defined using EBNF notation as:
  • Data Pipes: a concept of the flow language used to express syntax and construct the data flow within the system, defining how data are taken in from sensors, processed, and output. It contains one or more Pipe elements, each specifying a single data pipeline within the system, including the input source of data, the filter applied, and the directed control system. This can be formally defined using EBNF notation as:
  • Data Filters : a concept of the flow language used to express the syntax and constructs of filters used to process data within pipes. It comprises multiple definitions of two types of elements, namely DataFilter and ControlRule . Each filter has a type (e.g., threshold, range) and an associated growth development stage. The rule element, in turn, describes the condition(s) that dictate how data are filtered and when actions are triggered. This can be formally defined using EBNF notation as:
  • Actions: a concept of the flow language used to describe the syntax and constructs of the collection of actions. Each Action describes a specific operation to be performed by the system for a particular duration based on data conditions and decision-making logic, expressed in a relevant DataFilter element. This can be formally defined using EBNF notation. (A Python sketch of these pipe, filter, and rule semantics follows this list.)
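As a minimal sketch of these pipe-and-filter semantics in plain Python: the class names mirror the DSL concepts (Pipe, DataFilter, ControlRule), but the thresholds, actions, and sensor names are illustrative assumptions, not values from the paper.

```python
# Hypothetical Python sketch of the GreenH Flow pipe/filter/rule semantics.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ControlRule:
    condition: Callable[[float], bool]   # predicate over a sensor reading
    action: str                          # action to trigger when it holds

@dataclass
class DataFilter:
    kind: str                            # e.g. "threshold" or "range"
    stage: str                           # associated growth stage
    rules: list[ControlRule]

@dataclass
class Pipe:
    source: str                          # sensor feeding the pipe
    data_filter: DataFilter
    control_system: str                  # system the pipe directs

    def process(self, reading: float) -> list[str]:
        """Return the actions triggered by one sensor reading."""
        return [r.action for r in self.data_filter.rules if r.condition(reading)]

# Example: a humidity pipe for the vegetative-growth stage.
humidity_pipe = Pipe(
    source="humidity_sensor_1",
    data_filter=DataFilter(
        kind="threshold",
        stage="vegetative_growth",
        rules=[ControlRule(lambda h: h > 80.0, "ventilate"),
               ControlRule(lambda h: h < 40.0, "mist")],
    ),
    control_system="climate_controller",
)
print(humidity_pipe.process(85.0))  # ['ventilate']
```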

5.1.3. Formalizing the Concrete Syntax of GreenH Twin DSL

  • Virtual World: an element of the digital twins language used to bind the virtual representations of all greenhouse structural and behavioral components corresponding to real-world ones expressed in GreenH Design and GreenH Flow DSLs. This can be formally defined using EBNF notation as:
  • Sensor Instance: an element of the digital twins language used to represent a virtual sensor entity corresponding to one expressed in the GreenH Design language. This can be formally defined using EBNF notation as:
  • Simulation : an element of the digital twins language used to create simulations for monitoring/testing scenarios. It provides the capability to monitor the system behavior in various real-world scenarios based on real data. The Simulation element comprises the simulation environment settings and scenarios. This can be formally defined using EBNF notation as:
  • Agricultural Scenario : a concept of the language that acts as a contextual container to group a series of processes that relate to the simulation. It can be utilized to define a specific set of circumstances under which the contained processes will operate. Each process includes a name, a condition under which the process should be executed, and an action to be performed if the condition is met. This can be formally defined using EBNF notation as:
  • Action : a concept of the language that describes the action to trigger when the condition of a process is met. The details include the name and the command to execute. This can be formally defined using EBNF notation. (A hypothetical execution sketch follows this list.)
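A minimal Python sketch of how such a scenario could be executed, assuming a simple condition-action loop over the virtual sensor state. The 26/22-degree thresholds echo the verification example in Section 7.4; all other names and values are illustrative assumptions.

```python
# Hypothetical sketch of executing a GreenH Twin "Agricultural Scenario":
# each process runs its action when its condition over the virtual sensor
# state holds. Names and values are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Process:
    name: str
    condition: Callable[[dict], bool]    # predicate over virtual sensor state
    action: str                          # command to execute when it holds

@dataclass
class Scenario:
    name: str
    processes: list[Process]

    def step(self, state: dict) -> list[str]:
        """One simulation tick: return the commands whose conditions hold."""
        return [p.action for p in self.processes if p.condition(state)]

heatwave = Scenario(
    name="heatwave",
    processes=[
        Process("cool", lambda s: s["temperature"] > 26, "set_cooling high"),
        Process("relax", lambda s: s["temperature"] <= 22, "set_cooling low"),
    ],
)
print(heatwave.step({"temperature": 28.5}))  # ['set_cooling high']
```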

6. Use Case: Digital Twin Internet of Things Temperature Monitoring System

Representing a Simple Monitoring System Using GreenH DSLs

A snapshot of GreenH Design to represent the structure of the greenhouse.
A snapshot of GreenH Flow to represent the data flow and control in the greenhouse.
A snapshot of the translated GreenH Twin representing the corresponding digital twin simulation system within the greenhouse.

7. Language Evaluation Strategy

7.1. Expert Participant Characteristics

7.2. Evaluation Strategy and Criteria

7.3. Results Discussion and Final Remarks

7.4. GreenH Verification Using Model Checking

The GreenH Flow DSL example used for verification.
  • When the temperature is above 26 degrees and the cooling level is not already high, the system transitions the cooling level to high.
  • When the temperature falls to 22 degrees and lower and the cooling level is not already low, the system transitions the cooling level to low.
  • Cooling system actions are gradually phased out by decrementing the timer and turning off the system when the timer reaches 1.
  • When the temperature exceeds 26 degrees, the cooling system will eventually be set to high.
  • When the temperature falls to 22 degrees or lower, the cooling system will eventually be set to low. (Both properties are formalized in the sketch after this list.)
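These liveness requirements have a natural formalization in linear temporal logic. As a sketch, assuming the standard operators G ("always") and F ("eventually") rather than the paper's exact property specifications:

$$\mathbf{G}\left( (\mathit{temp} > 26 \land \mathit{cooling} \neq \mathit{high}) \rightarrow \mathbf{F}(\mathit{cooling} = \mathit{high}) \right)$$

$$\mathbf{G}\left( (\mathit{temp} \leq 22 \land \mathit{cooling} \neq \mathit{low}) \rightarrow \mathbf{F}(\mathit{cooling} = \mathit{low}) \right)$$

A model checker can then verify such properties against the state machine induced by the GreenH Flow control rules.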

8. Conclusions

Institutional Review Board Statement

Informed Consent Statement

Data Availability Statement

Conflicts of Interest

  • Attaran, M.; Celik, B.G. Digital Twin: Benefits, use cases, challenges, and opportunities. Decis. Anal. J. 2023, 6, 100165.
  • Anshari, M.; Hamdan, M. Enhancing e-government with a digital twin for innovation management. J. Sci. Technol. Policy Manag. 2023, 14, 1055–1065.
  • Armeni, P.; Polat, I.; De Rossi, L.M.; Diaferia, L.; Meregalli, S.; Gatti, A. Digital twins in healthcare: Is it the beginning of a new era of evidence-based medicine? A critical review. J. Pers. Med. 2022, 12, 1255.
  • Singh, M.; Fuenmayor, E.; Hinchy, E.P.; Qiao, Y.; Murray, N.; Devine, D. Digital Twin: Origin to Future. Appl. Syst. Innov. 2021, 4, 36.
  • Costello, K.; Omale, G. Gartner Survey Reveals Digital Twins Are Entering Mainstream Use; Gartner Inc.: Stamford, CT, USA, 2019; Available online: https://www.gartner.com/en/newsroom/press-releases/2019-02-20-gartner-survey-reveals-digital-twins-are-entering-mai (accessed on 22 February 2024).
  • Rayhana, R.; Xiao, G.; Liu, Z. Internet of things empowered smart greenhouse farming. IEEE J. Radio Freq. Identif. 2020, 4, 195–211.
  • Soussi, A.; Zero, E.; Sacile, R.; Trinchero, D.; Fossa, M. Smart Sensors and Smart Data for Precision Agriculture: A Review. Sensors 2024, 24, 2647.
  • Bucchiarone, A.; Cabot, J.; Paige, R.F.; Pierantonio, A. Grand challenges in model-driven engineering: An analysis of the state of the research. Softw. Syst. Model. 2020, 19, 5–13.
  • Kamburjan, E.; Sieve, R.; Prabhu, C.; Amato, M.; Barmina, G.; Occhipinti, E.; Johnsen, E.B. GreenhouseDT: An Exemplar for Digital Twins. In Proceedings of the SEAMS 2024, Lisbon, Portugal, 15–16 April 2024.
  • Barricelli, B.R.; Casiraghi, E.; Fogli, D. A survey on digital twin: Definitions, characteristics, applications, and design implications. IEEE Access 2019, 7, 167653–167671.
  • Li, M.; Wang, R.; Zhou, X.; Zhu, Z.; Wen, Y.; Tan, R. ChatTwin: Toward automated digital twin generation for data center via large language models. In Proceedings of the 10th ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation, Istanbul, Turkey, 15–16 November 2023; pp. 208–211.
  • Sun, Y.; Zhang, Q.; Bao, J.; Lu, Y.; Liu, S. Empowering digital twins with large language models for global temporal feature learning. J. Manuf. Syst. 2024, 74, 83–99.
  • Björnsson, B.; Borrebaeck, C.; Elander, N.; Gasslander, T.; Gawel, D.R.; Gustafsson, M.; Jörnsten, R.; Lee, E.J.; Li, X.; Lilja, S.; et al. Digital twins to personalize medicine. Genome Med. 2020, 12, 4.
  • Bersani, C.; Ruggiero, C.; Sacile, R.; Soussi, A.; Zero, E. Internet of Things Approaches for Monitoring and Control of Smart Greenhouses in Industry 4.0. Energies 2022, 15, 3834.
  • Tripathy, P.K.; Tripathy, A.K.; Agarwal, A.; Mohanty, S.P. MyGreen: An IoT-enabled smart greenhouse for sustainable agriculture. IEEE Consum. Electron. Mag. 2021, 10, 57–62.
  • Subahi, A.F.; Bouazza, K.E. An intelligent IoT-based system design for controlling and monitoring greenhouse temperature. IEEE Access 2020, 8, 125488–125500.
  • Arora, N.K. Agricultural sustainability and food security. Environ. Sustain. 2018, 1, 217–219.
  • Fei, X.; Xiao-Long, W.; Yong, X. Development of energy saving and rapid temperature control technology for intelligent greenhouses. IEEE Access 2021, 9, 29677–29685.
  • Sengupta, A.; Debnath, B.; Das, A.; De, D. FarmFox: A quad-sensor-based IoT box for precision agriculture. IEEE Consum. Electron. Mag. 2021, 10, 63–68.
  • Pandey, C.; Sethy, P.K.; Behera, S.K.; Vishwakarma, J.; Tande, V. Smart agriculture: Technological advancements on agriculture—A systematical review. Deep. Learn. Sustain. Agric. 2022, 1, 1–56.
  • Verbruggen, C.; Snoeck, M. Practitioners’ experiences with model-driven engineering: A meta-review. Softw. Syst. Model. 2023, 22, 111–130.
  • Peterson, T.A. Systems engineering: Transforming digital transformation. In Proceedings of the INCOSE International Symposium, Orlando, FL, USA, 20–25 July 2019.
  • Govindasamy, H.S.; Jayaraman, R.; Taspinar, B.; Lehner, D.; Wimmer, M. Air quality management: An exemplar for model-driven digital twin engineering. In Proceedings of the 2021 ACM/IEEE International Conference on Model Driven Engineering Languages and Systems Companion (MODELS-C), Fukuoka, Japan, 10–15 October 2021.
  • Bordeleau, F.; Combemale, B.; Eramo, R.; Van Den Brand, M.; Wimmer, M. Towards model-driven digital twin engineering: Current opportunities and future challenges. In Proceedings of the Systems Modelling and Management: First International Conference, ICSMM 2020, Bergen, Norway, 25–26 June 2020.
  • Palchunov, D.; Vaganova, A. Methods for Developing Digital Twins of Roles Based on Semantic Domain-Specific Languages. In Proceedings of the 2021 IEEE 22nd International Conference of Young Professionals in Electron Devices and Materials (EDM), Souzga, Russia, 30 June–4 July 2021; pp. 515–519.
  • Hernández-Morales, C.A.; Luna-Rivera, J.M.; Perez-Jimenez, R. Design and deployment of a practical IoT-based monitoring system for protected cultivations. Comput. Commun. 2022, 186, 51–64.
  • Yang, J.; Liu, M.; Lu, J.; Miao, Y.; Hossain, M.A.; Alhamid, M.F. Botanical internet of things: Toward smart indoor farming by connecting people, plant, data and clouds. Mob. Netw. Appl. 2018, 23, 188–202.
  • Khoa, T.A.; Man, M.M.; Nguyen, T.Y.; Nguyen, V.; Nam, N.H. Smart agriculture using IoT multi-sensors: A novel watering management system. J. Sens. Actuator Netw. 2019, 8, 45.
  • Aafreen, R.; Neyaz, S.Y.; Shamim, R.; Beg, M.S. An IoT based system for telemetry and control of Greenhouse environment. In Proceedings of the 2019 International Conference on Electrical, Electronics and Computer Engineering (UPCON), Aligarh, India, 8–10 November 2019.
  • Alves, R.G.; Souza, G.; Maia, R.F.; Tran, A.L.H.; Kamienski, C.; Soininen, J.P.; Aquino, P.T.; Lima, F. A digital twin for smart farming. In Proceedings of the 2019 IEEE Global Humanitarian Technology Conference (GHTC), Seattle, WA, USA, 17–20 October 2019.
  • Verdouw, C.; Tekinerdogan, B.; Beulens, A.; Wolfert, S. Digital twins in smart farming. Agric. Syst. 2021, 189, 103046.
  • Angin, P.; Anisi, M.H.; Göksel, F.; Gürsoy, C.; Büyükgülcü, A. AgriLoRa: A digital twin framework for smart agriculture. J. Wirel. Mob. Netw. Ubiquitous Comput. Dependable Appl. 2020, 11, 77–96.
  • Chaux, J.D.; Sanchez-Londono, D.; Barbieri, G. A digital twin architecture to optimize productivity within controlled environment agriculture. Appl. Sci. 2021, 11, 8875.
  • González, J.P.; Sanchez-Londoño, D.; Barbieri, G. A Monitoring Digital Twin for Services of Controlled Environment Agriculture. IFAC-PapersOnLine 2022, 55, 85–90.
  • Durão, L.F.C.; Haag, S.; Anderl, R.; Schützer, K.; Zancul, E. Digital twin requirements in the context of industry 4.0. In Product Lifecycle Management to Support Industry 4.0, Proceedings of the 15th IFIP WG 5.1 International Conference (PLM 2018), Turin, Italy, 2–4 July 2018; Springer: Cham, Switzerland, 2018.
  • Ewald, R.; Uhrmacher, A.M. SESSL: A domain-specific language for simulation experiments. ACM Trans. Model. Comput. Simul. (TOMACS) 2014, 24, 1–25.
  • Miller, J.A.; Han, J.; Hybinette, M. Using domain specific language for modeling and simulation: Scalation as a case study. In Proceedings of the 2010 Winter Simulation Conference, Baltimore, MD, USA, 5–8 December 2010.
  • Dhouib, S.; Kchir, S.; Stinckwich, S.; Ziadi, T.; Ziane, M. A domain-specific language to design, simulate and deploy robotic applications. In Proceedings of the Simulation, Modeling, and Programming for Autonomous Robots: Third International Conference (SIMPAR 2012), Tsukuba, Japan, 5–8 November 2012.
  • Barriga, J.A.; Clemente, P.J.; Sosa-Sánchez, E.; Prieto, Á.E. SimulateIoT: Domain Specific Language to design, code generation and execute IoT simulation environments. IEEE Access 2021, 9, 92531–92552.
  • Barriga, J.A.; Clemente, P.J.; Hernández, J.; Pérez-Toledano, M.A. SimulateIoT-FIWARE: Domain specific language to design, code generation and execute IoT simulation environments on FIWARE. IEEE Access 2022, 10, 7800–7822.
  • Franceschini, R.; Bisgambiglia, P.A.; Bisgambiglia, P.; Hill, D. DEVS-Ruby: A Domain Specific Language for DEVS Modeling and Simulation. In Proceedings of the 2014 Symposium on Theory of Modeling & Simulation, Tampa, FL, USA, 13–16 April 2014.
  • Groeneveld, D.; Tekinerdogan, B.; Garousi, V.; Catal, C. A domain-specific language framework for farm management information systems in precision agriculture. Precis. Agric. 2021, 22, 1067–1106.
  • Kawtrakul, A. Ontology engineering and knowledge services for agriculture domain. J. Integr. Agric. 2012, 11, 741–751.
  • Ceh, I.; Crepinšek, M.; Kosar, T.; Mernik, M. Ontology driven development of domain-specific languages. Comput. Sci. Inf. Syst. 2011, 8, 317–343.
  • Lamy, J.B. Owlready: Ontology-oriented programming in Python with automatic classification and high level constructs for biomedical ontologies. Artif. Intell. Med. 2017, 80, 11–28.
  • Allocca, C.; d’Aquin, M.; Motta, E. Towards a formalization of ontology relations in the context of ontology repositories. In Knowledge Discovery, Knowledge Engineering and Knowledge Management, Proceedings of the First International Joint Conference (IC3K 2009), Funchal, Portugal, 6–8 October 2009; Springer: Berlin/Heidelberg, Germany, 2009.
  • Mani, P.; Thirumalai Natesan, V. Experimental investigation of drying characteristics of lima beans with passive and active mode greenhouse solar dryers. J. Food Process Eng. 2021, 44, e13667.
  • Baglivo, C.; Mazzeo, D.; Panico, S.; Bonuso, S.; Matera, N.; Congedo, P.M.; Oliveti, G. Complete greenhouse dynamic simulation tool to assess the crop thermal well-being and energy needs. Appl. Therm. Eng. 2020, 179, 115698.
  • Peña-Fernández, A.; Colón-Reynoso, M.A.; Mazuela, P. Geometric analysis of greenhouse roofs for energy efficiency optimization and condensation drip reduction. Agriculture 2024, 14, 216.
  • Wangkahart, S.; Junsiri, C.; Srichat, A.; Poojeera, S.; Laloon, K.; Hongtong, K.; Boupha, P. Using Greenhouse Modelling to Identify the Optimal Conditions for Growing Crops in Northeastern Thailand. Math. Model. Eng. Probl. 2022, 9, 1648–1658.
  • Łysiak, G.P.; Szot, I. The use of temperature based indices for estimation of fruit production conditions and risks in temperate climates. Agriculture 2023, 13, 960.
  • Lata, S.; Verma, H.K. Selection of number and locations of temperature and luminosity sensors in intelligent greenhouse. Int. J. Appl. Res. 2018, 13, 10965–10971.
  • Lee, S.Y.; Lee, I.B.; Yeo, U.H.; Kim, R.W.; Kim, J.G. Optimal sensor placement for monitoring and controlling greenhouse internal environments. Biosyst. Eng. 2019, 188, 190–206.
  • Ajani, O.S.; Aboyeji, E.; Mallipeddi, R.; Uyeh, D.D.; Ha, Y.; Park, T. A genetic programming-based optimal sensor placement for greenhouse monitoring and control. Front. Plant Sci. 2023, 14, 1152036.
  • Wagg, D.J.; Worden, K.; Barthorpe, R.J.; Gardner, P. Digital twins: State-of-the-art and future directions for modeling and simulation in engineering dynamics applications. ASCE-ASME J. Risk Uncertain. Eng. Syst. Part B Mech. Eng. 2020, 6, 030901.
  • Agalianos, K.; Ponis, S.T.; Aretoulaki, E.; Plakas, G.; Efthymiou, O. Discrete event simulation and digital twins: Review and challenges for logistics. Procedia Manuf. 2020, 51, 1636–1677.
  • Jahić, B.; Guelfi, N.; Ries, B. SEMKIS-DSL: A domain-specific language to support requirements engineering of datasets and neural network recognition. Information 2023, 14, 213.
  • Han, Z.; Qazi, S.; Werner, M.; Devarajegowda, K.; Ecker, W. On Self-Verifying DSL Generation for Embedded Systems Automation. In Proceedings of the 24th MBMV Workshop 2021, Virtual Event, Germany, 18–19 March 2021.
  • Amdah, L.; Anwar, A. A DSL for collaborative business process. In Proceedings of the 2020 International Conference on Intelligent Systems and Computer Vision (ISCV), Fez, Morocco, 9–11 June 2020.
  • Huisman, M.; Wijs, A. Model Checking Algorithms. In Concise Guide to Software Verification: From Model Checking to Annotation Checking; Springer International Publishing: Cham, Switzerland, 2023; pp. 79–106.
  • Pollak, D.; Layka, V.; Sacco, A. Beginning Scala 3: A Functional and Object-Oriented Java Language, 3rd ed.; Apress: Berkeley, CA, USA, 2022; pp. 237–245.
  • Wang, B.; Wang, Z.; Wang, X.; Cao, Y.; Saurous, R.A.; Kim, Y. Grammar prompting for domain-specific language generation with large language models. Adv. Neural Inf. Process. Syst. 2024, 36, 1–26.
  • Poltronieri, I.; Zorzo, A.F.; Bernardino, M.; de Borba Campos, M. Usa-DSL: Usability evaluation framework for domain-specific languages. In Proceedings of the 33rd Annual ACM Symposium on Applied Computing 2018, Pau, France, 9–13 April 2018.
  • Nielsen, J. Usability inspection methods. In Proceedings of the Conference Companion on Human Factors in Computing Systems, Boston, MA, USA, 24–28 April 1994.
  • Kahraman, G.; Bilgen, S. A framework for qualitative assessment of domain-specific languages. Softw. Syst. Model. 2015, 14, 1505–1531.
  • Alaca, O.F.; Tezel, B.T.; Challenger, M.; Goulão, M.; Amaral, V.; Kardas, G. AgentDSM-Eval: A framework for the evaluation of domain-specific modeling languages for multi-agent systems. Comput. Stand. Interfaces 2021, 76, 103513.


Ref. | Parameters | IoT Systems
[ ] | Temperature, humidity, light intensity, pH value, and CO2 level | Temperature, light control, air pollution, and soil moisture monitoring
[ ] | Temperature and energy | Temperature control (PID)
[ ] | Temperature and energy | Temperature control (fuzzy logic)
[ ] | Temperature and humidity | Temperature control, humidity monitoring
[ ] | Temperature, humidity, luminosity, and CO2 | Temperature control; humidity, luminosity, and CO2 monitoring
[ ] | Soil moisture, humidity, and temperature | Watering management system
[ ] | CO2 levels, light intensity, and humidity | Soil irrigation system and more
Ref. | DSL | Purpose | Language | Approach | Type
[ ] | SESSL | Writing simulation experiments | Scala | LE | Embedded
[ ] | ScalaTion | Writing clear, concise, and intuitive simulation programs | Scala | LE | Embedded
[ ] | RobotML | Designing, simulating, and deploying robotic applications | DSL | MDE | Metamodel
[ ] | SimulateIoT | Designing simulation environments for IoT systems | DSL | MDE | Metamodel
[ ] | SimulateIoT-FIWARE | Designing simulation environments for IoT systems on the FIWARE platform | DSL | MDE | Metamodel
[ ] | DEVS-Ruby | Modelling and simulation of discrete event system specification (DEVS) | DSL | MDE | Finite state automata
Stage No. | Apple | Strawberry
1 | Purpose: Dormancy and Bud Break; Temp: below 7 °C; Period: 3–4 months | Purpose: Planting and Root Development; Temp: 18 °C day / 12 °C night; Period: 1–2 months
2 | Purpose: Growth of Leaf and Floral Initiation; Temp: 21–24 °C; Period: 1–2 months | Purpose: Vegetative Growth; Temp: 25 °C day / 12 °C night; Period: 2–3 months
3 | Purpose: Fruit Development and Growth; Temp: 22 °C day / 12 °C night; Period: 30–40 days | Purpose: Flowering and Fruiting; Temp: 25 °C day / 12 °C night; Period: 3–4 months
4 | Purpose: Late Fruit Maturation and Harvest; Temp: 22 °C day / 12 °C night; Period: 3–4 months | —
IoT System | Parameter to Measure | Measurement Unit
Temperature change parameter | Internal heating temperature | Celsius
Humidity change parameter | Internal humidity level | Percentage
Light change parameter | The intensity of light on a surface | Lux
Energy consumption parameter | Electrical power consumed by the system | Watt
Communication parameter | Data transmission between IoT components | Bits per second
ID | Criteria | Description
C1 | Expressiveness | Ability to model all necessary domain concepts accurately and completely.
C2 | Readability | Clarity of the language syntax.
C3 | Usability | Ease of use and learning for new users.
C4 | Consistency | Uniformity in language constructs and their usage.
C5 | Correctness | Enforcement of domain constraints to avoid invalid models.
C6 | Scalability | Ability to handle different sizes and complexities of models.
ID | Score (1) | Score (2) | Score (3) | Score (4) | Score (5) | Score (6) | Score (7) | Average Score | Summary of Expert Notes
C1 | 4 | 4 | 4 | 3 | 3.5 | 4 | 3 | 3.64 | Missing constructs for advanced scenarios in GreenH Design, such as network connection protocols and other types of smart devices.
C2 | 4 | 5 | 4 | 4 | 4 | 3 | 3 | 3.86 | DSL syntax is clear for target users in the GreenH Design and Flow DSLs; further simplification is recommended in the Twin DSL.
C3 | 4 | 3 | 4 | 3 | 4 | 3 | 2.5 | 3.00 | Difficulties might appear with more complex simulation scenarios. Enabling the execution engine to generate more boilerplate code is required, or a template-based approach for code generation should be considered.
C4 | 5 | 5 | 4 | 4 | 5 | 4 | 3 | 4.29 | DSL constructs are consistent, and each DSL focuses on a particular aspect of the language.
C5 | 4 | 4 | 4 | 3.5 | 3 | 3 | 3 | 3.50 | The language is defined and restricted to a metamodel and EBNF. Further correctness tests are required in later stages of development.
C6 | 5 | 3.5 | 4 | 4 | 4 | 3.5 | 4 | 4.00 | The DSLs can be scalable; advanced experiments are required.

Share and Cite

Subahi, A.F. Advancing Sustainable Cyber-Physical System Development with a Digital Twins and Language Engineering Approach: Smart Greenhouse Applications. Technologies 2024, 12, 147. https://doi.org/10.3390/technologies12090147



Daily PM2.5 concentration prediction based on variational modal decomposition and deep learning for multi-site temporal and spatial fusion of meteorological factors

  • Published: 29 August 2024
  • Volume 196, article number 859 (2024)



  • Xinrong Xie
  • Zhaocai Wang (ORCID: orcid.org/0000-0003-1396-6835)
  • Manli Xu
  • Nannan Xu

Air pollution, particularly PM2.5, has long been a critical concern for the atmospheric environment. Accurately predicting daily PM2.5 concentrations is crucial for both environmental protection and public health. This study introduces a new hybrid model within the “Decomposition-Prediction-Integration” (DPI) framework, which combines variational modal decomposition (VMD), causal convolutional neural network (CNN), bidirectional long short-term memory (BiLSTM), and attention mechanism (AM), named VCBA, for spatio-temporal fusion of multi-site data to forecast daily PM2.5 concentrations in a city. The approach involves integrating air quality data from the target site with data from neighboring sites, applying mathematical techniques for dimensionality reduction, decomposing PM2.5 concentration data using VMD, and utilizing causal CNN and BiLSTM models with an attention mechanism to enhance performance. The final prediction results are obtained through linear aggregation. Experimental results demonstrate that the VCBA model performs exceptionally well in predicting daily PM2.5 concentrations at various stations in Taiyuan City, Shanxi Province, China. Evaluation metrics such as RMSE, MAE, and R² are reported as 2.556, 1.998, and 0.973, respectively. Compared to traditional methods, this approach offers higher prediction accuracy and stronger spatio-temporal modeling capabilities, providing an effective solution for accurate PM2.5 daily concentration prediction.
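The abstract outlines the prediction stage of the VCBA pipeline. The following is a minimal, hypothetical tf.keras sketch of one per-mode branch (causal CNN, then BiLSTM, then attention, then a dense head); the layer sizes, the 14-step window, the 8 fused input features, and the 5-mode count are illustrative assumptions, not the authors' published configuration.

```python
# Hypothetical tf.keras sketch of one per-mode VCBA branch:
# causal CNN -> BiLSTM -> attention -> dense head. All sizes are assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

WINDOW, FEATURES = 14, 8  # days of history, fused multi-site/meteorological inputs

def build_vcba_branch() -> tf.keras.Model:
    inputs = layers.Input(shape=(WINDOW, FEATURES))
    x = layers.Conv1D(32, kernel_size=3, padding="causal", activation="relu")(inputs)
    x = layers.Bidirectional(layers.LSTM(32, return_sequences=True))(x)
    x = layers.Attention()([x, x])          # self-attention over the time axis
    x = layers.GlobalAveragePooling1D()(x)
    outputs = layers.Dense(1)(x)            # next-day value of one VMD mode
    return tf.keras.Model(inputs, outputs)

# One branch per VMD mode; the DPI framework aggregates the per-mode
# predictions linearly (a plain sum here) into the final PM2.5 forecast.
branches = [build_vcba_branch() for _ in range(5)]
window = np.random.rand(1, WINDOW, FEATURES).astype("float32")  # dummy input
forecast = sum(float(b(window).numpy().squeeze()) for b in branches)
print(forecast)
```

In the full DPI workflow, VMD would first decompose the PM2.5 series into modes, one branch would be trained per mode, and the branch outputs would be aggregated into the final forecast.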



Data availability

No datasets were generated or analysed during the current study.

Funding

This research was supported by the Foundation for Humanities and Social Sciences Research of the Ministry of Education in China in 2024, under the project "Study on dynamic multi-objective optimization mechanism of the coupled system of hydrological prediction and scheduling of terrace reservoir groups in the upper reaches of the Yangtze River."

Author information

Authors and affiliations

College of Information, Shanghai Ocean University, Hucheng Huan Road 999, Pudong, Shanghai 201306, P. R. China

Xinrong Xie, Zhaocai Wang, Manli Xu & Nannan Xu


Contributions

Xinrong Xie: Conceptualization, Methodology, Software, Data curation, Writing—original draft. Zhaocai Wang: Methodology, Data curation, Writing—review & editing, Supervision. Manli Xu: Writing—review & editing. Nannan Xu: Methodology. All authors reviewed the manuscript.

Corresponding author

Correspondence to Zhaocai Wang .

Ethics declarations

Ethical approval

No human participants or animal subjects were involved in this work; hence, ethics approval is not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (ZIP 1020 KB)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Xie, X., Wang, Z., Xu, M. et al. Daily PM2.5 concentration prediction based on variational modal decomposition and deep learning for multi-site temporal and spatial fusion of meteorological factors. Environ Monit Assess 196, 859 (2024). https://doi.org/10.1007/s10661-024-13005-2


Received: 11 April 2024

Accepted: 15 August 2024

Published: 29 August 2024

DOI: https://doi.org/10.1007/s10661-024-13005-2


Keywords

  • PM2.5 concentration
  • Modal decomposition
  • Deep learning
  • Multifactorial
  • Spatio-temporal feature fusion

Further reading

  1. Defining and Teaching Evaluative Thinking

    To that end, we propose that ET is essentially critical thinking applied to contexts of evaluation. We argue that ECB, and the field of evaluation more generally, would benefit from an explicit and transparent appropriation of well-established concepts and teaching strategies derived from the long history of work on critical thinking.

  2. Three ways to incorporate evaluative thinking in monitoring


  3. Perspectives on Monitoring and Evaluation

    Monitoring and Evaluation Training: A Systematic Approach ...

  4. Metacognitive Strategies and Development of Critical Thinking in Higher

    We understand that "critical thinking is a knowledge-seeking process via reasoning skills to solve problems and make decisions which allows us ... In R, we find: organization (O), monitoring (M), and evaluation (E). This instrument comprehensively, and fairly clearly, brings together essential aspects of metacognition. On one side, there is ...

  5. The Role of Evidence Evaluation in Critical Thinking: Fostering

    By epistemically vigilant we mean evaluating and monitoring the credibility and trustworthiness of information while being aware of the potential of being misinformed (Sperber et al., 2010). Epistemic vigilance is vital to critical thinking. We draw on Kuhn's (2018) definition of critical thinking as argumentation.

  6. Developing Critical Thinking Within Centralized Monitoring Teams

    Evaluation: Through producing a final report and assessing the outcome, the evaluation stage concludes the critical thinking cycle and overall process. Still, continuing open discussions among teams is one of the best ways to support and expand the team knowledge and understanding of the risk-based monitoring approach, while external ...

  7. Introduction to Monitoring and Evaluation: The Basics

    Monitoring and evaluation (M&E) is a critical process for assessing the performance and effectiveness of programs, projects, and policies. This process involves collecting and analyzing data on program activities, outputs, outcomes, and impact to determine whether the desired results have been achieved.

  8. Evaluative thinking

    Evaluative thinking is introduced as a form of critical thinking, and the resource then goes on to describe several key considerations in applying the technique. ... Evaluation Initiative, a global network of organizations and experts supporting country governments to strengthen monitoring, evaluation, and the use of evidence in their countries

  9. The Importance of Monitoring and Evaluation for Decision-Making

    Most of us, if not all of us, use informal M&E in our everyday decision-making processes. M&E is a management process that combines the oversight (monitoring) with the assessment of choices, processes, decisions, actions, and results (evaluation). It has two main uses: internal and external (see Fig. 4.1).

  10. Introduction to monitoring and evaluation

    Monitoring and evaluation are important parts of RBM, based on clearly defined and measurable results, processes, methodologies and tools to achieve results. M&E can be viewed as providing a set of tools to enable RBM, helping decision makers track progress and demonstrate an intervention's higher-level results. Results-based M&E moves from a focus on the immediate results, such as the ...

  11. Principles for effective use of systems thinking in evaluation

    A preamble provides a useful, concise discussion of the field of systems thinking, some high-level definitions and general considerations to inform the use of the principles in evaluation. Guidance on 'Systems-in-evaluation' and each of the four inter-related principles: Interrelationships, Perspectives, Boundaries and Dynamics, include:

  12. Critical Thinking

    Critical thinking refers to the process of actively analyzing, assessing, synthesizing, evaluating and reflecting on information gathered from observation, experience, or communication. It is thinking in a clear, logical, reasoned, and reflective manner to solve problems or make decisions. Basically, critical thinking is taking a hard look at ...

  13. 10 Reasons Why Monitoring and Evaluation is Important

    Monitoring and evaluation can help fuel innovative thinking and methods for data collection. While some fields require specific methods, others are open to more unique ideas. As an example, fields that have traditionally relied on standardized tools like questionnaires, focus groups, interviews, and so on can branch out to video and photo ...

  14. PDF Basic Principles of Monitoring and Evaluation

    over time (monitoring); how effectively a programme was implemented and whether there are gaps between the planned and achieved results (evaluation); and whether the changes in well-being are due to the programme and to the programme alone (impact evaluation). Monitoring is a continuous process of collecting and analysing

  15. Collaborative Learning and Critical Thinking

    The cognitive skills of analysis, interpretation, inference, explanation, evaluation, and of monitoring and correcting one's own reasoning are at the heart of critical thinking (APA 1990). Critical thinking not only mimics the process of scientific investigation - identifying a question, formulating a hypothesis, gathering and analyzing ...

  16. Triangulation

    Triangulation facilitates validation of data through cross verification from more than two sources. It tests the consistency of findings obtained through different instruments and increases the chance to control, or at least assess some of the threats or multiple causes influencing our results. Triangulation is not just about validation but ...

  17. Linking planning with monitoring & evaluation

    Planning, monitoring and evaluation are at the heart of a learning-based approach to management. Achieving collaborative, business/environmental or personal goals requires effective planning and follow-through. The plan is effectively a "route-map" from the present to the future. To plan a …

  18. Monitoring and evaluation for thinking and working politically

    Tyrrel L, Kelly L, Roche C, et al. (2020) Uncertainty and COVID-19: A turning point for monitoring, evaluation, research and ...

  19. Best Monitoring & Evaluation Courses Online with Certificates [2024]

    In summary, here are 10 of our most popular monitoring and evaluation courses. Measuring the Success of a Patient Safety or Quality Improvement Project (Patient Safety VI): Johns Hopkins University. Monitoring and Observability for Development and DevOps: IBM. Reviews & Metrics for Software Improvements: University of Alberta.

  20. PDF

    This document provides a guide to using the MEAL DPro (Monitoring, Evaluation, Accountability and Learning Digital Professional) toolkit. It is licensed under the Creative Commons Attribution-NonCommercial license. The guide acknowledges contributions from various organizations that informed its development. MEAL (Monitoring, Evaluation, Accountability and Learning) is a key part of project ...

  21. Monitoring and Evaluation Officer Job Description

    Proven experience in monitoring and evaluation roles, preferably in the development or nonprofit sector. Strong understanding of monitoring and evaluation concepts, frameworks, and methodologies. Proficiency in quantitative and qualitative data analysis techniques and tools. Excellent analytical, critical thinking, and problem-solving skills.

  22. Metacognitive writing strategies, critical thinking skills, and

    Magno considered critical thinking an outcome of metacognition because critical thinking is formed through the "development and evaluation of arguments and coming up with inferences" (p. 139). Schuster (2019) conceptualized critical thinking as a "commitment to letting logic and reasoning be the driving force in guiding judgment and ...

  23. How to teach students critical thinking skills to combat misinformation

    At least 21 state legislatures have taken steps to reform K-12 media and information literacy education, with California, Delaware, Illinois, and New Jersey passing comprehensive reforms (U.S. Media Literacy Policy Report, Media Literacy Now, 2024). The largely bipartisan efforts are a response to challenges that most school curriculums do not yet address or teach—skills like sorting out ...

