
Putting Applied Research to Work in Your School or District

This ASCD article covers what applied research is, the steps of applied research (form a research team, develop research questions, design research and collect data, analyze and disseminate findings), why you should try applied research, and the elements of applied research (Figure 1). When designing the research and planning data collection, the research team should specify:

  • The types of data to be collected (such as interviews, focus groups, surveys, observational field notes, student and teacher work, and existing quantitative metrics).
  • The timeline for collecting data (such as one month, one semester, or an entire year).
  • A plan for analyzing data (such as dividing the data into manageable chunks and discussing themes that emerge from it).
  • Strategies for validating data (such as having multiple data sources and multiple interviewers).
  • The types of research participants (such as what constituencies are important to understanding a diversity of perspectives on the issue).


Science in School

What is it good for? Basic versus applied research

Author(s): Martin McHugh, Marcus Baumann, Sarah Hayes, F. Jerry Reen, Laurie Ryan, Davide Tiana, Jessica Whelan

Basic research is often misunderstood by the public and misconstrued by the media. Try this role play to learn how research is funded and how basic research advances and protects society.

In 2019, an international research group published a paper examining the effect of the song Scary Monsters and Nice Sprites by Skrillex on the breeding behaviours of mosquitoes.[1] The paper became a viral news story, with many media outlets using the ‘obscure’ research story to generate clicks. However, the research concluded that, when mosquitoes were exposed to the song, they bit less and refrained from mating. The paper generated equal amounts of praise and criticism, but it highlights the potential of basic research and creative thinking in science. Indeed, the historical problem with basic research is the lack of immediate commercial objectives. To non-scientists, basic research can seem like a waste of money, whereas applied research, designed to solve practical problems with obvious scientific and societal benefits, seems like a better use of resources.

The following activity will bring the debate into the classroom and allow students to explore the pros and cons of basic and applied research. Using an argumentation framework, students will discuss the merits of a variety of research projects, with updates to show how some of them later turned out to be important for vaccine development for COVID-19.

What kinds of research should be funded?

In this activity, students will be divided into groups of funders and scientists. Using the materials provided, the scientists will pitch their research proposals to the funders, who will have €100 000 at their disposal. The activity also provides cues to promote argumentation among students, developing critical thinking, reasoning, communication, and scientific literacy skills.[2]


Learning objectives and context

After the activity, students should understand

  • how scientific research is funded and that this involves difficult decisions;
  • the difference between basic and applied research;
  • how applied research relies on basic research findings, and that it is difficult to predict what might become useful.

To set the scene, students should be asked who they think funds scientific research. They will generate multiple answers, from the government to universities and industry. In reality, funding can come from a variety of sources and can be public, private, national, or international.

The next question is how funding bodies select which research to fund. Scientific research is often broadly divided into two types: basic research (also called fundamental research) and applied research.

  • Basic research is about pushing the boundaries of our understanding and generating new knowledge. An example is researching how a physiological process works at the molecular level.
  • Applied research involves applying existing knowledge to create solutions to specific problems. An example is developing a treatment for a disease.

However, many research projects have elements of both basic and applied research. Research scientists from around the world must compete and push the merits of their work to get funding.

The following role-play activity will put students in the shoes of both the funding bodies and scientists. In groups, students will be asked to pitch their project proposal to the funders, who will ultimately decide how to allocate €100 000 to a variety of projects.

A key element of this lesson is to encourage debate and argumentation. Students acting as scientists should try to convince the funders with their words. They should be encouraged to make claims, offer rebuttals, and back up their statements with data where possible. Each scientist has an individual text that gives them the information needed to argue effectively. To support the debate, funders are given a list of key questions, along with more probing questions. This activity can also be extended over multiple lessons to allow students more time to debate.

The activity uses three supporting documents:

  • Funder information sheet
  • Project proposal cards
  • Discussion cards

  • For this role-play activity, divide students into groups of five or six. Each group requires four scientists and at least one funder.
  • Hand out the project proposal cards to the four scientists in each group. There are four project proposals, and each scientist should get a different one. One of these proposals is highly applied, while the others are more basic. All funders receive the same information sheet and can allocate €100 000. If there are two funders in a single group, they must come to a consensus.
  • Give students 10 minutes to read over their documents. Funders need to be aware of the key questions (on the information sheet) they can use to assess the proposals. Scientists need to be aware of the key arguments they need to make to receive funding (on the proposal cards).
  • Each scientist then gets 2 minutes uninterrupted to make their ‘pitch’ for funding. Once the pitches are complete, funders ask their key questions and all scientists are allowed to argue their positions against each other. This should take around 15 minutes.
  • At the end of the activity, funders are asked to fill in the funding-allocation table at the bottom of their information sheet. This is to be kept private.
  • In turn, ask the funders from each group to come to the front of the class. The table on their sheets can be copied onto the board for the funders to fill out. Once complete, they need to give the class a brief justification for their decisions.
  • Throughout this process, ask the students if they are seeing any patterns emerging in the funding between groups.
  • Ask whether the students think each project is more basic or applied.
  • Next, hand out the discussion cards to each group. Project 3 is purely applied and has a clear link to vaccines, but these cards describe how proposals 1, 2, and 4 turned out to be fundamental to the development of the COVID-19 vaccine in unexpected ways.
  • Get the class to discuss whether this new information would have changed their funding decisions.
  • Discuss whether the applications envisioned by the researchers were necessarily those that turned out to be important.

As previously stated, the goal of this activity is for students to understand how research is funded and the differences between applied and basic research. The activity is designed to highlight how basic research often forms the foundation for applied research. Both types of research are important, but basic research can be perceived negatively by the public. It is often impossible to predict how knowledge gained through a basic research project may become vital for a future application. Often, multiple scientific advances have to be combined to achieve an applied impact. Sometimes, scientists must accept that they may not be able to identify an immediate application for newly generated knowledge. However, without new knowledge, we may lack the foundation for future applications that could be years away.


In this example, the three more basic research proposals proved to be vital to the final application. This can be easily illustrated with proposal cards 1 and 3. Proposal card 1 discusses modified mRNA, and this research underpinned the manufacture of the COVID-19 vaccine. The two proposals are so closely linked that you can replace the word ‘polynucleotide(s)’ with mRNA on proposal card 3 and the document still makes perfect sense.

As a follow-up to this activity, ask students to go online and find the most obscure and weird piece of basic scientific research (published in a peer-reviewed journal) that they can. The Ig Nobel Prizes are a good source of inspiration for this. As with the mosquito example used in the introduction to this activity, get the students to find the practical applications behind the headlines and articles.

[1] Dieng H et al. (2019) The electronic song “Scary Monsters and Nice Sprites” reduces host attack and mating success in the dengue vector Aedes aegypti. Acta Tropica 194:93–99. doi: 10.1016/j.actatropica.2019.03.027

[2] Erduran S, Ozdem Y, Park JY (2015) Research trends on argumentation in science education: a journal content analysis from 1998–2014. International Journal of STEM Education 2:5. doi: 10.1186/s40594-015-0020-1

  • Discover CRISPR-Cas9 and how it revolutionized gene editing: Chan H (2016) Faster, cheaper, CRISPR: the new gene technology revolution. Science in School 38:18–21.
  • Read an article on different techniques to resolve and predict protein structures: Heber S (2021) From gaming to cutting-edge biology: AI and the protein folding problem. Science in School 52.
  • Read an article on how modern vaccines work: Paréj K (2021) Vaccines in the spotlight. Science in School 53.
  • Visit the Annals of Improbable Research, which runs the Ig Nobel Prizes, to learn more about research that makes you laugh and then makes you think.
  • Read a simple explanation of basic research and its importance from the National Institutes of Health.
  • Read a short article from Harvard University on the importance of basic research.
  • Watch a video on the potential uses of CRISPR outside gene editing.
  • Watch a video on how 50 years of fundamental research enabled the rapid development of mRNA vaccines for COVID-19.
  • Read an article from STAT describing the main steps that – 50 years later – led to COVID-19 mRNA vaccines.
  • Watch a video introducing the ESRF and its 41 beamlines.
  • Read an article from Scientific American that underlines the important issue of research funding and final profits.
  • Read an article from c&en magazine on synchrotrons and their uses.
  • Read an interview with Katalin Karikó in Scientific American that discusses her role in developing the mRNA technology used in COVID-19 vaccines.

Dr Martin McHugh is the education and public engagement officer for SSPC , the Science Foundation Ireland (SFI) research centre for pharmaceuticals at the University of Limerick. Formerly a researcher in informal learning and part-time lecturer on science education, he has degrees from NUI Galway and the University of Edinburgh in environmental science and teaching. He is also a qualified secondary school science and biology teacher.

Dr Marcus Baumann is an assistant professor in the School of Chemistry at University College Dublin. He leads a research group aiming to develop new methods for the sustainable generation of drug-like molecules through the use of continuous-flow technologies. These methods are based on using light and enzymes in combination with machines to synthesise biologically active molecules.

Dr Sarah Hayes is the chief operating officer (COO) of SSPC . Sarah’s background is in physical chemistry and she received her PhD in Science Education. Sarah has many years of teaching experience as a physics and chemistry teacher. Through her various roles, she has been involved in research, curriculum development, and continuous professional development courses. Her most significant focus has been informal and non-formal learning and engagement.

Dr Jerry Reen is a lecturer in molecular microbial ecology at University College Cork. His research team study polymicrobial biofilm communities to understand molecular communication systems between species in disease and biotechnology. They also apply molecular technologies to harness biocatalytic proteins and bioactive compounds of marine origin.

Laurie Ryan is an assistant lecturer in general science at Athlone Institute of Technology (AIT). She is a former secondary school science teacher and conducts research in the area of STEM education and outreach. She is currently finishing her PhD examining argumentation in non-formal learning environments.

Dr Davide Tiana is a lecturer in inorganic chemistry at University College Cork. His independent group uses computational chemistry to study, understand, and explain chemistry. Their research goals range from developing new models to better explain chemical interactions (e.g., chemical bonding, dispersion forces) to the design of new molecules such as nanodrugs.

Dr Jessica Whelan is a lecturer at the University College Dublin School of Chemical and Bioprocess Engineering. Her research focuses on developing tools and approaches to optimize the production of proteins, vaccines, and cell and gene therapies. The aim is to make medicines available to patients at the highest quality and lowest cost possible.


Using Research and Reason in Education: How Teachers Can Use Scientifically Based Research to Make Curricular & Instructional Decisions

Paula J. Stanovich and Keith E. Stanovich University of Toronto

Produced by RMC Research Corporation, Portsmouth, New Hampshire

This publication was produced under National Institute for Literacy Contract No. ED-00CO-0093 with RMC Research Corporation. Sandra Baxter served as the contracting officer's technical representative. The views expressed herein do not necessarily represent the policies of the National Institute for Literacy. No official endorsement by the National Institute for Literacy or any product, commodity, service, or enterprise is intended or should be inferred.

The National Institute for Literacy

Sandra Baxter, Interim Executive Director; Lynn Reddy, Communications Director

To order copies of this booklet, contact the National Institute for Literacy at EdPubs, PO Box 1398, Jessup, MD 20794-1398. Call 800-228-8813 or email [email protected] .

The National Institute for Literacy, an independent federal organization, supports the development of high quality state, regional, and national literacy services so that all Americans can develop the literacy skills they need to succeed at work, at home, and in the community.

The Partnership for Reading, a project administered by the National Institute for Literacy, is a collaborative effort of the National Institute for Literacy, the National Institute of Child Health and Human Development, the U.S. Department of Education, and the U.S. Department of Health and Human Services to make evidence-based reading research available to educators, parents, policy makers, and others with an interest in helping all people learn to read well.

Editorial support provided by C. Ralph Adler and Elizabeth Goldman, and design/production support provided by Diane Draper and Bob Kozman, all of RMC Research Corporation.

Introduction

In the recent move toward standards-based reform in public education, many reform efforts require schools to demonstrate that students are achieving specified educational outcomes at a required level of achievement. Federal and state legislation, in particular, has codified this standards-based movement and tied funding and other incentives to student achievement.

At first, demonstrating student learning may seem like a simple task, but reflection reveals that it is a complex challenge requiring educators to use specific knowledge and skills. Standards-based reform has many curricular and instructional prerequisites. The curriculum must represent the most important knowledge, skills, and attributes that schools want their students to acquire because these learning outcomes will serve as the basis of assessment instruments. Likewise, instructional methods should be appropriate for the designed curriculum. Teaching methods should lead to students learning the outcomes that are the focus of the assessment standards.

Standards- and assessment-based educational reforms seek to obligate schools and teachers to supply evidence that their instructional methods are effective. But testing is only one of three ways to gather evidence about the effectiveness of instructional methods. Evidence of instructional effectiveness can come from any of the following sources:

  • Demonstrated student achievement in formal testing situations implemented by the teacher, school district, or state;
  • Published findings of research-based evidence that the instructional methods being used by teachers lead to student achievement; or
  • Proof of reason-based practice that converges with a research-based consensus in the scientific literature. This type of justification of educational practice becomes important when direct evidence may be lacking (a direct test of the instructional efficacy of a particular method is absent), but there is a theoretical link to research-based evidence that can be traced.

Each of these methods has its pluses and minuses. While testing seems the most straightforward, it is not necessarily the clear indicator of good educational practice that the public seems to think it is. The meaning of test results is often not immediately clear. For example, comparing averages or other indicators of overall performance from tests across classrooms, schools, or school districts takes no account of the resources and support provided to a school, school district, or individual professional. Poor outcomes do not necessarily indict the efforts of physicians in Third World countries who work with substandard equipment and supplies. Likewise, objective evidence of below-grade or below-standard mean performance of a group of students should not necessarily indict their teachers if essential resources and supports (e.g., curriculum materials, institutional aid, parental cooperation) to support teaching efforts were lacking. However, the extent to which children could learn effectively even in under-equipped schools is not known because evidence-based practices are, by and large, not implemented. That is, there is evidence that children experiencing academic difficulties can achieve more educationally if they are taught with effective methods; sadly, scientific research about what works does not usually find its way into most classrooms.

Testing provides a useful professional calibrator, but it requires great contextual sensitivity in interpretation. It is not the entire solution for assessing the quality of instructional efforts. This is why research-based and reason-based educational practice are also crucial for determining the quality and impact of programs. Teachers thus have the responsibility to be effective users and interpreters of research. Providing a survey and synthesis of the most effective practices for a variety of key curriculum goals (such as literacy and numeracy) would seem to be a helpful idea, but no document could provide all of that information. (Many excellent research syntheses exist, such as the National Reading Panel, 2000; Snow, Burns, & Griffin, 1998; Swanson, 1999, but the knowledge base about effective educational practices is constantly being updated, and many issues remain to be settled.)

As professionals, teachers can become more effective and powerful by developing the skills to recognize scientifically based practice and, when the evidence is not available, use some basic research concepts to draw conclusions on their own. This paper offers a primer for those skills that will allow teachers to become independent evaluators of educational research.

The Formal Scientific Method and Scientific Thinking in Educational Practice

When you go to your family physician with a medical complaint, you expect that the recommended treatment has proven to be effective with many other patients who have had the same symptoms. You may even ask why a particular medication is being recommended for you. The doctor may summarize the background knowledge that led to that recommendation and very likely will cite summary evidence from the drug's many clinical trials and perhaps even give you an overview of the theory behind the drug's success in treating symptoms like yours.

All of this discussion will probably occur in rather simple terms, but that does not obscure the fact that the doctor has provided you with data to support a theory about your complaint and its treatment. The doctor has shared knowledge of medical science with you. And while everyone would agree that the practice of medicine has its "artful" components (for example, the creation of a healing relationship between doctor and patient), we have come to expect and depend upon the scientific foundation that underpins even the artful aspects of medical treatment. Even when we do not ask our doctors specifically for the data, we assume it is there, supporting our course of treatment.

Actually, Vaughn and Dammann (2001) have argued that the correct analogy is to say that teaching is in part a craft, rather than an art. They point out that craft knowledge is superior to alternative forms of knowledge such as superstition and folklore because, among other things, craft knowledge is compatible with scientific knowledge and can be more easily integrated with it. One could argue that in this age of education reform and accountability, educators are being asked to demonstrate that their craft has been integrated with science--that their instructional models, methods, and materials can be likened to the evidence a physician should be able to produce showing that a specific treatment will be effective. As with medicine, constructing teaching practice on a firm scientific foundation does not mean denying the craft aspects of teaching.

Architecture is another professional practice that, like medicine and education, grew from being purely a craft to a craft based firmly on a scientific foundation. Architects wish to design beautiful buildings and environments, but they must also apply many foundational principles of engineering and adhere to structural principles. If they do not, their buildings, however beautiful they may be, will not stand. Similarly, a teacher seeks to design lessons that stimulate students and entice them to learn--lessons that are sometimes a beauty to behold. But if the lessons are not based in the science of pedagogy, they, like poorly constructed buildings, will fail.

Education is informed by formal scientific research through the use of archival research-based knowledge such as that found in peer-reviewed educational journals. Preservice teachers are first exposed to the formal scientific research in their university teacher preparation courses (it is hoped), through the instruction received from their professors, and in their course readings (e.g., textbooks, journal articles). Practicing teachers continue their exposure to the results of formal scientific research by subscribing to and reading professional journals, by enrolling in graduate programs, and by becoming lifelong learners.

Scientific thinking in practice is what characterizes reflective teachers--those who inquire into their own practice and who examine their own classrooms to find out what works best for them and their students. What follows in this document is, first, a "short course" on how to become an effective consumer of the archival literature that results from the conduct of formal scientific research in education and, second, a section describing how teachers can think scientifically in their ongoing reflection about their classroom practice.

Being able to access mechanisms that evaluate claims about teaching methods and to recognize scientific research and its findings is especially important for teachers because they are often confronted with the view that "anything goes" in the field of education--that there is no such thing as best practice in education, that there are no ways to verify what works best, that teachers should base their practice on intuition, or that the latest fad must be the best way to teach, please a principal, or address local school reform. The "anything goes" mentality actually represents a threat to teachers' professional autonomy. It provides a fertile environment for gurus to sell untested educational "remedies" that are not supported by an established research base.

Teachers as independent evaluators of research evidence

One factor that has impeded teachers from being active and effective consumers of educational science has been a lack of orientation and training in how to understand the scientific process and how that process results in the cumulative growth of knowledge that leads to validated educational practice. Educators have only recently attempted to resolve educational disputes scientifically, and teachers have not yet been armed with the skills to evaluate disputes on their own.

Educational practice has suffered greatly because its dominant model for resolving or adjudicating disputes has been more political (with its corresponding factions and interest groups) than scientific. The field's failure to ground practice in the attitudes and values of science has made educators susceptible to the "authority syndrome" as well as fads and gimmicks that ignore evidence-based practice.

When our ancestors needed information about how to act, they would ask their elders and other wise people. Contemporary society and culture are much more complex. Mass communication allows virtually anyone (on the Internet, through self-help books) to proffer advice, to appear to be a "wise elder." The current problem is how to sift through the avalanche of misguided and uninformed advice to find genuine knowledge. Our problem is not information; we have tons of information. What we need are quality control mechanisms.

Peer-reviewed research journals in various disciplines provide those mechanisms. However, even with mechanisms like these in behavioral science and education, it is all too easy to do an "end run" around the quality control they provide. Powerful information dissemination outlets such as publishing houses and mass media frequently do not discriminate between good and bad information. This provides a fertile environment for gurus to sell untested educational "remedies" that are not supported by an established research base and, often, to discredit science, scientific evidence, and the notion of research-based best practice in education. As Gersten (2001) notes, both seasoned and novice teachers are "deluged with misinformation" (p. 45).

We need tools for evaluating the credibility of these many and varied sources of information; the ability to recognize research-based conclusions is especially important. Acquiring those tools means understanding scientific values and learning methods for making inferences from the research evidence that arises through the scientific process. These values and methods were recently summarized by a panel of the National Academy of Sciences convened on scientific inquiry in education (Shavelson & Towne, 2002), and our discussion here will be completely consistent with the conclusions of that NAS panel.

The scientific criteria for evaluating knowledge claims are not complicated and could easily be included in initial teacher preparation programs, but they usually are not (which deprives teachers of an opportunity to become more efficient and autonomous in their work right at the beginning of their careers). These criteria include:

  • the publication of findings in refereed journals (scientific publications that employ a process of peer review),
  • the duplication of the results by other investigators, and
  • a consensus within a particular research community on whether there is a critical mass of studies that point toward a particular conclusion.

In their discussion of the evolution of the American Educational Research Association (AERA) conference and the importance of separating research evidence from opinion when making decisions about instructional practice, Levin and O'Donnell (2000) highlight the importance of enabling teachers to become independent evaluators of research evidence. Being aware of the importance of research published in peer-reviewed scientific journals is only the first step because this represents only the most minimal of criteria. Following is a review of some of the principles of research-based evaluation that teachers will find useful in their work.

Publicly verifiable research conclusions: Replication and Peer Review

Source credibility: the consumer protection of peer-reviewed journals.

The front line of defense for teachers against incorrect information in education is the existence of peer-reviewed journals in education, psychology, and other related social sciences. These journals publish empirical research on topics relevant to classroom practice and human cognition and learning. They are the first place that teachers should look for evidence of validated instructional practices.

As a general quality control mechanism, peer-reviewed journals provide a "first pass" filter that teachers can use to evaluate the plausibility of educational claims. To put it more concretely, one ironclad criterion that will always work for teachers when presented with claims of uncertain validity is the question: Have findings supporting this method been published in recognized scientific journals that use some type of peer review procedure? The answer to this question will almost always separate pseudoscientific claims from the real thing.

In peer review, authors submit a paper to a journal for publication, where it is critiqued by several scientists. The critiques are reviewed by an editor (usually a scientist with an extensive history of work in the specialty area covered by the journal). The editor then decides whether the weight of opinion warrants immediate publication, publication after further experimentation and statistical analysis, or rejection because the research is flawed or does not add to the knowledge base. Most journals carry a statement of editorial policy outlining their exact procedures for publication, so it is easy to check whether a journal is, in fact, peer reviewed.

Peer review is a minimal criterion, not a stringent one. Not all information in peer-reviewed scientific journals is necessarily correct, but it has at the very least undergone a cycle of peer criticism and scrutiny. However, it is because the presence of peer-reviewed research is such a minimal criterion that its absence becomes so diagnostic. The failure of an idea, a theory, an educational practice, behavioral therapy, or a remediation technique to have adequate documentation in the peer-reviewed literature of a scientific discipline is a very strong indication to be wary of the practice.

The mechanisms of peer review vary somewhat from discipline to discipline, but the underlying rationale is the same. Peer review is one way (replication of a research finding is another) that science institutionalizes the attitudes of objectivity and public criticism. Ideas and experimentation undergo a honing process in which they are submitted to other critical minds for evaluation. Ideas that survive this critical process have begun to meet the criterion of public verifiability. The peer review process is far from perfect, but it really is the only external consumer protection that teachers have.

The history of reading instruction illustrates the high cost that is paid when the peer-reviewed literature is ignored, when the normal processes of scientific adjudication are replaced with political debates and rhetorical posturing. A vast literature has been generated on best practices that foster children's reading acquisition (Adams, 1990; Anderson, Hiebert, Scott, & Wilkinson, 1985; Chard & Osborn, 1999; Cunningham & Allington, 1994; Ehri, Nunes, Stahl, & Willows, 2001; Moats, 1999; National Reading Panel, 2000; Pearson, 1993; Pressley, 1998; Pressley, Rankin, & Yokol, 1996; Rayner, Foorman, Perfetti, Pesetsky, & Seidenberg, 2002; Reading Coherence Initiative, 1999; Snow, Burns, & Griffin, 1998; Spear-Swerling & Sternberg, 2001). Yet much of this literature remains unknown to many teachers, contributing to the frustrating lack of clarity about accepted, scientifically validated findings and conclusions on reading acquisition.

Teachers should also be forewarned about the difference between professional education journals that are magazines of opinion in contrast to journals where primary reports of research, or reviews of research, are peer reviewed. For example, the magazines Phi Delta Kappan and Educational Leadership both contain stimulating discussions of educational issues, but neither is a peer-reviewed journal of original research. In contrast, the American Educational Research Journal (a flagship journal of the AERA) and the Journal of Educational Psychology (a flagship journal of the American Psychological Association) are both peer-reviewed journals of original research. Both are main sources for evidence on validated techniques of reading instruction and for research on aspects of the reading process that are relevant to a teacher's instructional decisions.

This is true, too, of presentations at conferences of educational organizations. Some are data-based presentations of original research. Others are speeches reflecting personal opinion about educational problems. While these talks can be stimulating and informative, they are not a substitute for empirical research on educational effectiveness.

Replication and the importance of public verifiability.

Research-based conclusions about educational practice are public in an important sense: they do not exist solely in the mind of a particular individual but have been submitted to the scientific community for criticism and empirical testing by others. Knowledge considered "special"--the province of the thought of an individual and immune from scrutiny and criticism by others--can never have the status of scientific knowledge. Research-based conclusions, when published in a peer reviewed journal, become part of the public realm, available to all, in a way that claims of "special expertise" are not.

Replication is the second way that science makes research-based conclusions concrete and "public." In order to be considered scientific, a research finding must be presented to other researchers in the scientific community in a way that enables them to attempt the same experiment and obtain the same results. When the same results occur, the finding has been replicated. This process ensures that a finding is not the result of the errors or biases of a particular investigator. Replicable findings become part of the converging evidence that forms the basis of a research-based conclusion about educational practice.

John Donne told us that "no man is an island." Similarly, in science, no researcher is an island. Each investigator is connected to the research community and its knowledge base. This interconnection enables science to grow cumulatively and allows research-based educational practice to be built on a convergence of knowledge from a variety of sources. Researchers constantly build on previous knowledge in order to go beyond what is currently known. This process is possible only if research findings are presented in such a way that any investigator can build on them.

Philosopher Daniel Dennett (1995) has said that science is "making mistakes in public. Making mistakes for all to see, in the hopes of getting the others to help with the corrections" (p. 380). We might ask those proposing an educational innovation for the evidence that they have in fact "made some mistakes in public." Legitimate scientific disciplines can easily provide such evidence. For example, scientists studying the psychology of reading once thought that reading difficulties were caused by faulty eye movements. This hypothesis has been shown to be in error, as has another that followed it, that so-called visual reversal errors were a major cause of reading difficulty. Both hypotheses were found not to square with the empirical evidence (Rayner, 1998; Share & Stanovich, 1995). The hypothesis that reading difficulties can be related to language difficulties at the phonological level has received much more support (Liberman, 1999; National Reading Panel, 2000; Rayner, Foorman, Perfetti, Pesetsky, & Seidenberg, 2002; Shankweiler, 1999; Stanovich, 2000).

After making a few such "errors" in public, reading scientists have begun, in the last 20 years, to get it right. But the only reason teachers can have confidence that researchers are now "getting it right" is that researchers made it open, public knowledge when they got things wrong. Proponents of untested and pseudoscientific educational practices will never point to cases where they "got it wrong" because they are not committed to public knowledge in the way that actual science is. These proponents do not need, as Dennett says, "to get others to help in making the corrections" because they have no intention of correcting their beliefs and prescriptions based on empirical evidence.

Education is so susceptible to fads and unproven practices because of its tacit endorsement of a personalistic view of knowledge acquisition--one that is antithetical to the scientific value of the public verifiability of knowledge claims. Many educators believe that knowledge resides within particular individuals--with particularly elite insights--who then must be called upon to dispense this knowledge to others. Indeed, some educators reject public, depersonalized knowledge in social science because they believe it dehumanizes people. Science, however, with its conception of publicly verifiable knowledge, actually democratizes knowledge. It frees practitioners and researchers from slavish dependence on authority.

Subjective, personalized views of knowledge degrade the human intellect by creating conditions that subjugate it to an elite whose "personal" knowledge is not accessible to all (Bronowski, 1956, 1977; Dawkins, 1998; Gross, Levitt, & Lewis, 1997; Medawar, 1982, 1984, 1990; Popper, 1972; Wilson, 1998). Empirical science, by generating knowledge and moving it into the public domain, is a liberating force. Teachers can consult the research and decide for themselves whether the state of the literature is as the expert portrays it. All teachers can benefit from some rudimentary grounding in the most fundamental principles of scientific inference. With knowledge of a few uncomplicated research principles, such as control, manipulation, and randomization, anyone can enter the open, public discourse about empirical findings. In fact, with the exception of a few select areas such as the eye movement research mentioned previously, much of the work described in noted summaries of reading research (e.g., Adams, 1990; Snow, Burns, & Griffin, 1998) could easily be replicated by teachers themselves.

There are many ways that the criteria of replication and peer review can be utilized in education to base practitioner training on research-based best practice. Take continuing teacher education in the form of inservice sessions, for example. Teachers and principals who select speakers for professional development activities should ask speakers for the sources of their conclusions in the form of research evidence in peer-reviewed journals. They should ask speakers for bibliographies of the research evidence published on the practices recommended in their presentations.

The science behind research-based practice relies on systematic empiricism

Empiricism is the practice of relying on observation: scientists find out about the world by examining it. For a long time it was believed that knowledge was best obtained through pure thought or by appealing to authority; the refusal of some of Galileo's contemporaries to look into his telescope is an example of how empiricism has been ignored at certain points in history. Galileo claimed to have seen moons around the planet Jupiter. Another scholar, Francesco Sizi, attempted to refute Galileo, not with observations, but with the following argument:

There are seven windows in the head, two nostrils, two ears, two eyes and a mouth; so in the heavens there are two favorable stars, two unpropitious, two luminaries, and Mercury alone undecided and indifferent. From which and many other similar phenomena of nature such as the seven metals, etc., which it were tedious to enumerate, we gather that the number of planets is necessarily seven...ancient nations, as well as modern Europeans, have adopted the division of the week into seven days, and have named them from the seven planets; now if we increase the number of planets, this whole system falls to the ground...moreover, the satellites are invisible to the naked eye and therefore can have no influence on the earth and therefore would be useless and therefore do not exist. (Holton & Roller, 1958, p. 160)

Three centuries of the demonstrated power of the empirical approach give us an edge on poor Sizi. Take away those years of empiricism, and many of us might have been there nodding our heads and urging him on. In fact, the empirical approach is not necessarily obvious, which is why we often have to teach it, even in a society that is dominated by science.

Empiricism pure and simple is not enough, however. Observation itself is fine and necessary, but pure, unstructured observation of the natural world will not lead to scientific knowledge. Write down every observation you make from the time you get up in the morning to the time you go to bed on a given day. When you finish, you will have a great number of facts, but you will not have a greater understanding of the world. Scientific observation is termed systematic because it is structured so that the results of the observation reveal something about the underlying causal structure of events in the world. Observations are structured so that, depending upon the outcome of the observation, some theories of the causes of the outcome are supported and others rejected.

Teachers can benefit by understanding two things about research and causal inferences. The first is the simple (but sometimes obscured) fact that statements about best instructional practices are statements that contain a causal claim. These statements claim that one type of method or practice causes superior educational outcomes. Second, teachers must understand how the logic of the experimental method provides the critical support for making causal inferences.

Science addresses testable questions

Science advances by positing theories to account for particular phenomena in the world, by deriving predictions from these theories, by testing the predictions empirically, and by modifying the theories based on the tests (the sequence is typically theory -> prediction -> test -> theory modification). What makes a theory testable? A theory must have specific implications for observable events in the natural world.

Science deals only with a certain class of problem: the kind that is empirically solvable. That does not mean that the division between solvable and unsolvable problems is fixed forever. Quite the contrary: some problems that are currently unsolvable may become solvable as theory and empirical techniques become more sophisticated. For example, decades ago historians would not have believed that the controversial issue of whether Thomas Jefferson had a child with his slave Sally Hemings was an empirically solvable question. Yet, by 1998, this problem had become solvable through advances in genetic technology, and a paper was published in the journal Nature (Foster, Jobling, Taylor, Donnelly, de Knijff, Mieremet, Zerjal, & Tyler-Smith, 1998) on the question.

The criterion of whether a problem is "testable" is called the falsifiability criterion: a scientific theory must always be stated in such a way that the predictions derived from it can potentially be shown to be false. The falsifiability criterion states that, for a theory to be useful, the predictions drawn from it must be specific. The theory must go out on a limb, so to speak, because in telling us what should happen, the theory must also imply that certain things will not happen. If these latter things do happen, it is a clear signal that something is wrong with the theory. It may need to be modified, or we may need to look for an entirely new theory. Either way, we will end up with a theory that is closer to the truth.

In contrast, if a theory does not rule out any possible observations, then the theory can never be changed, and we are frozen into our current way of thinking with no possibility of progress. A successful theory cannot account for every possible happening; a theory that does so robs itself of any predictive power.

What we are talking about here is a certain type of intellectual honesty. In science, the proponent of a theory is always asked to address this question before the data are collected: "What data pattern would cause you to give up, or at least to alter, this theory?" In the same way, the falsifiability criterion is a useful consumer protection for the teacher when evaluating claims of educational effectiveness. Proponents of an educational practice should be asked for evidence; they should also be willing to admit that contrary data will lead them to abandon the practice. True scientific knowledge is held tentatively and is subject to change based on contrary evidence. Educational remedies not based on scientific evidence will often fail to put themselves at risk by specifying what data patterns would prove them false.

Objectivity and intellectual honesty

Objectivity, another form of intellectual honesty in research, means that we let nature "speak for itself" without imposing our wishes on it--that we report the results of experimentation as accurately as we can and that we interpret them as fairly as possible. (The fact that this goal is unattainable for any single human being should not dissuade us from holding objectivity as a value.)

In the language of the general public, open-mindedness means being open to possible theories and explanations for a particular phenomenon. But in science it means that and something more. Philosopher Jonathan Adler (1998) teaches us that science values another aspect of open-mindedness even more highly: "What truly marks an open-minded person is the willingness to follow where evidence leads. The open-minded person is willing to defer to impartial investigations rather than to his own predilections...Scientific method is attunement to the world, not to ourselves" (p. 44).

Objectivity is critical to the process of science, but it does not mean that such attitudes must characterize each and every scientist for science as a whole to work. Jacob Bronowski (1973, 1977) often argued that the unique power of science to reveal knowledge about the world does not arise because scientists are uniquely virtuous (that they are completely objective or that they are never biased in interpreting findings, for example). It arises because fallible scientists are immersed in a process of checks and balances--a process in which scientists are always there to criticize and to root out errors. Philosopher Daniel Dennett (1999/2000) points out that "scientists take themselves to be just as weak and fallible as anybody else, but recognizing those very sources of error in themselves...they have devised elaborate systems to tie their own hands, forcibly preventing their frailties and prejudices from infecting their results" (p. 42). More humorously, psychologist Ray Nickerson (1998) makes the related point that the vanities of scientists are actually put to use by the scientific process, noting that it is "not so much the critical attitude that individual scientists have taken with respect to their own ideas that has given science its success...but more the fact that individual scientists have been highly motivated to demonstrate that hypotheses that are held by some other scientists are false" (p. 32). These authors suggest that the strength of scientific knowledge comes not from scientists' being virtuous, but from the social process in which scientists constantly cross-check each other's knowledge and conclusions.

The public criteria of peer review and replication of findings exist in part to keep checks on the objectivity of individual scientists. Individuals cannot hide bias and nonobjectivity by personalizing their claims and keeping them from public scrutiny. Science does not accept findings that have failed the tests of replication and peer review precisely because it wants to ensure that all findings in science are in the public domain, as defined above. Purveyors of pseudoscientific educational practices fail the test of objectivity and are often identifiable by their attempts to do an "end run" around the public mechanisms of science by avoiding established peer review mechanisms and the information-sharing mechanisms that make replication possible. Instead, they attempt to promulgate their findings directly to consumers, such as teachers.

The principle of converging evidence

The principle of converging evidence has been well illustrated in the controversies surrounding the teaching of reading. The methods of systematic empiricism employed in the study of reading acquisition are many and varied. They include case studies, correlational studies, experimental studies, narratives, quasi-experimental studies, surveys, epidemiological studies and many others. The results of many of these studies have been synthesized in several important research syntheses (Adams, 1990; Ehri et al., 2001; National Reading Panel, 2000; Pressley, 1998; Rayner et al., 2002; Reading Coherence Initiative, 1999; Share & Stanovich, 1995; Snow, Burns, & Griffin, 1998; Snowling, 2000; Spear-Swerling & Sternberg, 2001; Stanovich, 2000). These studies were used in a process of establishing converging evidence, a principle that governs the drawing of the conclusion that a particular educational practice is research-based.

The principle of converging evidence is applied in situations requiring a judgment about where the "preponderance of evidence" points. Most areas of science contain competing theories. The extent to which a particular study can be seen as uniquely supporting one particular theory depends on whether other competing explanations have been ruled out. A particular experimental result is never equally relevant to all competing theories. An experiment may be a very strong test of one or two alternative theories but a weak test of others. Thus, research is considered highly convergent when a series of experiments consistently supports a given theory while collectively eliminating the most important competing explanations. Although no single experiment can rule out all alternative explanations, taken collectively, a series of partially diagnostic experiments can lead to a strong conclusion if the data converge.

Contrast this idea of converging evidence with the mistaken view that a problem in science can be solved with a single, crucial experiment, or that a single critical insight can advance theory and overturn all previous knowledge. This view of scientific progress fits nicely with the operation of the news media, in which history is tracked by presenting separate, disconnected "events" in bite-sized units. This is a gross misunderstanding of scientific progress and, if taken too seriously, leads to misconceptions about how conclusions are reached about research-based practices.

One experiment rarely decides an issue, supporting one theory and ruling out all others. Issues are most often decided when the community of scientists gradually begins to agree that the preponderance of evidence supports one alternative theory rather than another. Scientists do not evaluate data from a single experiment that has finally been designed in the perfect way. They most often evaluate data from dozens of experiments, each containing some flaws but providing part of the answer.

Although there are many ways in which an experiment can go wrong (or become confounded), a scientist with experience working on a particular problem usually has a good idea of what most of the critical factors are, and there are usually only a few. The idea of converging evidence tells us to examine the pattern of flaws running through the research literature because the nature of this pattern can either support or undermine the conclusions that we might draw.

For example, suppose that the findings from a number of different experiments were largely consistent in supporting a particular conclusion. Given the imperfect nature of experiments, we would evaluate the extent and nature of the flaws in these studies. If all the experiments were flawed in a similar way, this circumstance would undermine confidence in the conclusions drawn from them because the consistency of the outcome may simply have resulted from a particular, consistent flaw. On the other hand, if all the experiments were flawed in different ways, our confidence in the conclusions increases because it is less likely that the consistency in the results was due to a contaminating factor that confounded all the experiments. As Anderson and Anderson (1996) note, "When a conceptual hypothesis survives many potential falsifications based on different sets of assumptions, we have a robust effect" (p. 742).

Suppose that five different theoretical summaries (call them A, B, C, D, and E) of a given set of phenomena exist at one time and are investigated in a series of experiments. Suppose that one set of experiments represents a strong test of theories A, B, and C, and that the data largely refute theories A and B and support C. Imagine also that another set of experiments is a particularly strong test of theories C, D, and E, and that the data largely refute theories D and E and support C. In such a situation, we would have strong converging evidence for theory C. Not only do we have data supportive of theory C, but we have data that contradict its major competitors. Note that no one experiment tests all the theories, but taken together, the entire set of experiments allows a strong inference.

In contrast, if the two sets of experiments each represent strong tests of B, C, and E, and the data strongly support C and refute B and E, the overall support for theory C would be less strong than in our previous example. The reason is that, although data supporting theory C have been generated, there is no strong evidence ruling out two viable alternative theories (A and D). Thus research is highly convergent when a series of experiments consistently supports a given theory while collectively eliminating the most important competing explanations. Although no single experiment can rule out all alternative explanations, taken collectively, a series of partially diagnostic experiments can lead to a strong conclusion if the data converge in the manner of our first example.
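The elimination logic of these two scenarios can be traced with a minimal sketch. The Python snippet below is purely illustrative (the experiment records are hypothetical, not drawn from any cited study): each experiment is represented by the set of theories it strongly tested and the subset it refuted, and the code reports which theories remain viable and which were never strongly tested.

theories = {"A", "B", "C", "D", "E"}

def surviving(experiments):
    # Theories refuted by at least one experiment, and theories never strongly tested
    refuted = set().union(*(e["refuted"] for e in experiments))
    tested = set().union(*(e["tested"] for e in experiments))
    return theories - refuted, theories - tested

# Scenario 1: together the experiments strongly test every theory; only C survives.
scenario_1 = [
    {"tested": {"A", "B", "C"}, "refuted": {"A", "B"}},
    {"tested": {"C", "D", "E"}, "refuted": {"D", "E"}},
]

# Scenario 2: A and D are never strongly tested, so the support for C is weaker.
scenario_2 = [
    {"tested": {"B", "C", "E"}, "refuted": {"B", "E"}},
    {"tested": {"B", "C", "E"}, "refuted": {"B", "E"}},
]

for name, scenario in [("Scenario 1", scenario_1), ("Scenario 2", scenario_2)]:
    viable, untested = surviving(scenario)
    print(f"{name}: still viable = {sorted(viable)}, never strongly tested = {sorted(untested)}")

In the first scenario only theory C survives; in the second, A and D remain viable simply because no experiment was in a position to rule them out, which is exactly why the overall support for C is weaker.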

Increasingly, the combining of evidence from disparate studies to form a conclusion is being done more formally through the statistical technique termed meta-analysis (Cooper & Hedges, 1994; Hedges & Olkin, 1985; Hunter & Schmidt, 1990; Rosenthal, 1995; Schmidt, 1992; Swanson, 1999), which has been used extensively to establish whether various medical practices are research based. In a medical context, meta-analysis:

involves adding together the data from many clinical trials to create a single pool of data big enough to eliminate much of the statistical uncertainty that plagues individual trials...The great virtue of meta-analysis is that clear findings can emerge from a group of studies whose findings are scattered all over the map. (Plotkin, 1996, p. 70)

Meta-analysis is used to validate educational practices in just the same way it is used in medicine. The effects obtained when one practice is compared against another are expressed in a common statistical metric that allows comparison of effects across studies. The findings are then statistically amalgamated in some standard ways (Cooper & Hedges, 1994; Hedges & Olkin, 1985; Swanson, 1999), and a conclusion about differential efficacy is reached if the amalgamation process passes certain statistical criteria. In some cases, of course, no conclusion can be drawn with confidence, and the result of the meta-analysis is inconclusive.
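
As a rough illustration of that amalgamation step, the sketch below (Python, with invented effect sizes and sample sizes, not data from any actual review) pools several studies' standardized mean differences using inverse-variance weights, the basic move behind a simple fixed-effect meta-analysis.

```python
import math

# Hypothetical studies comparing one practice against another.
# Each tuple is (Cohen's d, sample size per group).
studies = [(0.45, 30), (0.20, 80), (0.60, 25), (0.35, 50)]

def d_variance(d, n_per_group):
    """Approximate sampling variance of Cohen's d for two equal-sized groups."""
    return 2 / n_per_group + d ** 2 / (4 * n_per_group)

# Weight each study by the inverse of its variance: larger, more precise
# studies count for more in the pooled estimate.
weights = [1 / d_variance(d, n) for d, n in studies]
pooled_d = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect size d = {pooled_d:.2f} (SE = {pooled_se:.2f})")
# The pooled estimate and its standard error are what allow a clearer
# conclusion than any single, noisier study could support on its own.
```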

More and more commentators on the educational research literature are calling for a greater emphasis on meta-analysis as a way of dampening the contentious disputes about conflicting studies that plague education and other behavioral sciences (Kavale & Forness, 1995; Rosnow & Rosenthal, 1989; Schmidt, 1996; Stanovich, 2001; Swanson, 1999). The method is useful for ending disputes that seem to be nothing more than a "he-said, she-said" debate. An emphasis on meta-analysis has often revealed that we actually have more stable and useful findings than is apparent from a perusal of the conflicts in our journals.

The National Reading Panel (2000) found just this in their meta-analysis of the evidence surrounding several issues in reading education. For example, they concluded that the results of a meta-analysis of the results of 66 comparisons from 38 different studies indicated "solid support for the conclusion that systematic phonics instruction makes a bigger contribution to children's growth in reading than alternative programs providing unsystematic or no phonics instruction" (p. 2-84). In another section of their report, the National Reading Panel reported that a meta-analysis of 52 studies of phonemic awareness training indicated that "teaching children to manipulate the sounds in language helps them learn to read. Across the various conditions of teaching, testing, and participant characteristics, the effect sizes were all significantly greater than chance and ranged from large to small, with the majority in the moderate range. Effects of phonemic awareness training on reading lasted well beyond the end of training" (p. 2-5).

A statement by a task force of the American Psychological Association (Wilkinson, 1999) on statistical methods in psychology journals provides an apt summary for this section. The task force stated that investigators should not "interpret a single study's results as having importance independent of the effects reported elsewhere in the relevant literature" (p. 602). Science progresses by convergence upon conclusions. The outcomes of one study can only be interpreted in the context of the present state of the convergence on the particular issue in question.

The logic of the experimental method

Scientific thinking is based on the ideas of comparison, control, and manipulation. In a true experimental study, these characteristics of scientific investigation must be arranged to work in concert.

Comparison alone is not enough to justify a causal inference. In methodology texts, correlational investigations (which involve comparison only) are distinguished from true experimental investigations that warrant much stronger causal inferences because they involve comparison, control, and manipulation. The mere existence of a relationship between two variables does not guarantee that changes in one are causing changes in the other. Correlation does not imply causation.

There are two potential problems with drawing causal inferences from correlational evidence. The first is called the third-variable problem. It occurs when the correlation between the two variables does not indicate a direct causal path between them but arises because both variables are related to a third variable that has not even been measured.

The second is called the directionality problem. It creates potential interpretive difficulties because even if two variables have a direct causal relationship, the direction of that relationship is not indicated by the mere presence of the correlation. In short, a correlation between variables A and B could arise because changes in A are causing changes in B or because changes in B are causing changes in A. The mere presence of the correlation does not allow us to decide between these two possibilities.

The heart of the experimental method lies in manipulation and control. In contrast to a correlational study, where the investigator simply observes whether the natural fluctuation in two variables displays a relationship, the investigator in a true experiment manipulates the variable thought to be the cause (the independent variable) and looks for an effect on the variable thought to be the effect (the dependent variable) while holding all other variables constant by control and randomization. This method addresses the third-variable problem because, in the natural world, many different things are related; the experimental method may be viewed as a way of prying apart these naturally occurring relationships. It does so because it isolates one particular variable (the hypothesized cause) by manipulating it and holding everything else constant (control).

When manipulation is combined with a procedure known as random assignment (in which the subjects themselves do not determine which experimental condition they will be in but, instead, are randomly assigned to one of the experimental groups), scientists can rule out alternative explanations of data patterns. By using manipulation, experimental control, and random assignment, investigators construct stronger comparisons so that the outcome eliminates alternative theories and explanations.
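
A toy simulation (Python, with made-up numbers and hypothetical variable names) illustrates the contrast: when an unmeasured third variable drives both measures, mere observation shows a correlation even though the "treatment" has no effect, while random assignment breaks the treatment's link to the confounder.

```python
import math
import random

random.seed(1)

def corr(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

n = 2000
home_support = [random.gauss(0, 1) for _ in range(n)]  # unmeasured third variable

# Correlational world: children with more home support happen to receive
# more of the practice AND score higher, so practice and score correlate
# even though the practice itself contributes nothing in this simulation.
practice = [h + random.gauss(0, 1) for h in home_support]
score = [h + random.gauss(0, 1) for h in home_support]
print("Correlational study:   r =", round(corr(practice, score), 2))

# Experimental world: the amount of practice is assigned at random, so it
# is no longer tied to home support and the spurious relationship vanishes.
assigned_practice = [random.gauss(0, 1) for _ in range(n)]
print("Randomized experiment: r =", round(corr(assigned_practice, score), 2))
```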

The need for both correlational methods and true experiments

As strong as they are methodologically, studies employing true experimental logic are not the only type that can be used to draw conclusions. Correlational studies have value. The results from many different types of investigation, including correlational studies, can be amalgamated to derive a general conclusion. The basis for the conclusion rests on the convergence observed across the variety of methods used. This is most certainly true in classroom and curriculum research. It is necessary to amalgamate the results not only from experimental investigations, but also from correlational studies, nonequivalent control group studies, time series designs, and various other quasi-experimental and multivariate correlational designs. All have their strengths and weaknesses. For example, it is often (but not always) the case that experimental investigations are high in internal validity but limited in external validity, whereas correlational studies are often high in external validity but low in internal validity.

Internal validity concerns whether we can infer a causal effect for a particular variable. The more a study employs the logic of a true experiment (i.e., includes manipulation, control, and randomization), the more we can make a strong causal inference. External validity concerns the generalizability of the conclusion to the population and setting of interest. Internal and external validity are often traded off across different methodologies. Experimental laboratory investigations are high in internal validity but may not fully address concerns about external validity. Field classroom investigations, on the other hand, are often quite high in external validity but because of the logistical difficulties involved in carrying them out, they are often quite low in internal validity. That is why we need to look for a convergence of results, not just consistency from one method. Convergence increases our confidence in the external and internal validity of our conclusions.

Again, this underscores why correlational studies can contribute to knowledge. First, some variables simply cannot be manipulated for ethical reasons (for instance, human malnutrition or physical disabilities). Other variables, such as birth order, sex, and age, are inherently correlational because they cannot be manipulated, and therefore the scientific knowledge concerning them must be based on correlational evidence. Finally, logistical difficulties in classroom and curriculum research often make it impossible to achieve the logic of the true experiment. However, this circumstance is not unique to educational or psychological research. Astronomers obviously cannot manipulate all the variables affecting the objects they study, yet they are able to arrive at conclusions.

Complex correlational techniques are essential in the absence of experimental research because statistics such as multiple regression, path analysis, and structural equation modeling allow for the partial control of third variables when those variables can be measured. These statistics allow us to recalculate the correlation between two variables after the influence of other variables is removed. If a potential third variable can be measured, complex correlational statistics can help us determine whether that third variable is determining the relationship. These correlational statistics and designs help to rule out certain causal hypotheses, even if they cannot demonstrate the true causal relation definitively.
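
One standard recalculation of this kind is the first-order partial correlation, which removes the linear influence of a measured third variable C from the correlation between A and B. The sketch below (Python; the three zero-order correlations are hypothetical) shows the computation.

```python
import math

def partial_corr(r_ab, r_ac, r_bc):
    """Correlation of A and B with the measured third variable C partialled out."""
    return (r_ab - r_ac * r_bc) / math.sqrt((1 - r_ac ** 2) * (1 - r_bc ** 2))

# Hypothetical zero-order correlations: A = amount of tutoring,
# B = reading gain, C = family income (the measured third variable).
r_ab, r_ac, r_bc = 0.40, 0.50, 0.60

print(round(partial_corr(r_ab, r_ac, r_bc), 2))  # about 0.14
# The raw correlation of .40 shrinks to roughly .14 once income is
# partialled out, suggesting that much (though not all) of the original
# association reflects the measured third variable rather than a direct
# effect of tutoring.
```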

Stages of scientific investigation: The role of case studies and qualitative investigations

The educational literature includes many qualitative investigations that focus less on issues of causal explanation and variable control and more on thick description, in the manner of the anthropologist (Geertz, 1973, 1979). The context of a person's behavior is described as much as possible from the standpoint of the participant. Many different fields (e.g., anthropology, psychology, education) contain case studies where the focus is detailed description and contextualization of the situation of a single participant (or very few participants).

The usefulness of case studies and qualitative investigations is strongly determined by how far scientific investigation has advanced in a particular area. The insights gained from case studies or qualitative investigations may be quite useful in the early stages of an investigation of a certain problem. They can help us determine which variables deserve more intense study by drawing attention to heretofore unrecognized aspects of a person's behavior and by suggesting how understanding of behavior might be sharpened by incorporating the participant's perspective.

However, when we move from the early stages of scientific investigation, where case studies may be very useful, to the more mature stages of theory testing--where adjudicating between causal explanations is the main task--the situation changes drastically. Case studies and qualitative description are not useful at the later stages of scientific investigation because they cannot be used to confirm or disconfirm a particular causal theory. They lack the comparative information necessary to rule out alternative explanations.

Where qualitative investigations are useful relates strongly to a distinction in philosophy of science between the context of discovery and the context of justification. Qualitative research, case studies, and clinical observations support a context of discovery where, as Levin and O'Donnell (2000) note in an educational context, such research must be regarded as "preliminary/exploratory, observational, hypothesis generating" (p. 26). They rightly point to the essential importance of qualitative investigations because "in the early stages of inquiry into a research topic, one has to look before one can leap into designing interventions, making predictions, or testing hypotheses" (p. 26). The orientation provided by qualitative investigations is critical in such cases. Even more important, the results of quantitative investigations--which must sometimes abstract away some of the contextual features of a situation--are often contextualized by the thick situational description provided by qualitative work.

However, in the context of justification, variables must be measured precisely, large groups must be tested to make sure the conclusion generalizes and, most importantly, many variables must be controlled because alternative causal explanations must be ruled out. Gersten (2001) summarizes the value of qualitative research accurately when he says that "despite the rich insights they often provide, descriptive studies cannot be used as evidence for an intervention's efficacy...descriptive research can only suggest innovative strategies to teach students and lay the groundwork for development of such strategies" (p. 47). Qualitative research does, however, help to identify fruitful directions for future experimental studies.

Nevertheless, here is why sole reliance on qualitative techniques to determine the effectiveness of curricula and instructional strategies has become problematic. As a researcher, you typically want to do one of two things.

Objective A

The researcher wishes to make some type of statement about a relationship, however minimal. That is, you at least want to use terms like greater than, or less than, or equal to. You want to say that such and such an educational program or practice is better than another. "Better than" and "worse than" are, of course, quantitative statements--and, in the context of issues about what leads to or fosters greater educational achievement, they are causal statements as well. To support such quantitative causal claims, you obviously must rely on the experimental logic that has been outlined above. To justify such statements, you must adhere to the canons of quantitative research logic.

Objective B

The researcher seeks to adhere to an exclusively qualitative path that abjures statements about relationships and never uses comparative terms of magnitude. The investigator desires to simply engage in thick description of a domain that may well prompt hypotheses when later work moves on to the more quantitative methods that are necessary to justify a causal inference.

Investigators pursuing Objective B are doing essential work. They provide rich qualitative information that suggests hypotheses for further study. In education, however, investigators sometimes claim to be pursuing Objective B but slide over into Objective A without realizing they have made a crucial switch. They want to make comparative, or quantitative, statements, but have not carried out the proper types of investigation to justify them. They want to say that a certain educational program is better than another (that is, it causes better school outcomes). They want to give educational prescriptions that are assumed to hold for a population of students, not just for the single or few individuals who were the objects of the qualitative study. They want to condemn an educational practice (and, by inference, deem an alternative quantitatively and causally better). But instead of taking the necessary course of pursuing Objective A, they carry out their investigation in the manner of Objective B.

Let's recall why the use of single case or qualitative description as evidence in support of a particular causal explanation is inappropriate. The idea of alternative explanations is critical to an understanding of theory testing. The goal of experimental design is to structure events so that support of one particular explanation simultaneously disconfirms other explanations. Scientific progress can occur only if the data that are collected rule out some explanations. Science sets up conditions for the natural selection of ideas. Some survive empirical testing and others do not.

This is the honing process by which ideas are sifted so that those that contain the most truth are found. But there must be selection in this process: data collected as support for a particular theory must not leave many other alternative explanations as equally viable candidates. For this reason, scientists construct control or comparison groups in their experimentation. These groups are formed so that, when their results are compared with those from an experimental group, some alternative explanations are ruled out.

Case studies and qualitative description lack the comparative information necessary to prove that a particular theory or educational practice is superior, because they fail to test an alternative; they rule nothing out. Take the seminal work of Jean Piaget, for example. His case studies were critical in pointing developmental psychology in new and important directions, but many of his theoretical conclusions and causal explanations did not hold up in controlled experiments (Bjorklund, 1995; Goswami, 1998; Siegler, 1991).

In summary, as educational psychologist Richard Mayer (2000) notes, "the domain of science includes both some quantitative and qualitative methodologies" (p. 39), and the key is to use each where it is most effective (see Kamil, 1995). Likewise, in their recent book on research-based best practices in comprehension instruction, Block and Pressley (2002) argue that future progress in understanding how comprehension works will depend on a healthy interaction between qualitative and quantitative approaches. They point out that getting an initial idea of the comprehension processes involved in hypertext and Web-based environments will involve detailed descriptive studies using think-alouds and assessments of qualitative decision making. Qualitative studies of real reading environments will set the stage for more controlled investigations of causal hypotheses.

The progression to more powerful methods

A final useful concept is the progression to more powerful research methods ("more powerful" in this context meaning more diagnostic of a causal explanation). Research on a particular problem often proceeds from weaker methods (ones less likely to yield a causal explanation) to ones that allow stronger causal inferences. For example, interest in a particular hypothesis may originally emerge from a particular case study of unusual interest. This is the proper role for case studies: to suggest hypotheses for further study with more powerful techniques and to motivate scientists to apply more rigorous methods to a research problem. Thus, following the case studies, researchers often undertake correlational investigations to verify whether the link between variables is real rather than the result of the peculiarities of a few case studies. If the correlational studies support the relationship between relevant variables, then researchers will attempt experiments in which variables are manipulated in order to isolate a causal relationship between the variables.

Summary of principles that support research-based inferences about best practice

Our sketch of the principles that support research-based inferences about best practice in education has revealed that:

  • Science progresses by investigating solvable, or testable, empirical problems.
  • To be testable, a theory must yield predictions that could possibly be shown to be wrong.
  • The concepts in the theories in science evolve as evidence accumulates. Scientific knowledge is not infallible knowledge, but knowledge that has at least passed some minimal tests. The theories behind research-based practice can be proven wrong, and therefore they contain a mechanism for growth and advancement.
  • Theories are tested by systematic empiricism. The data obtained from empirical research are in the public domain in the sense that they are presented in a manner that allows replication and criticism by other scientists.
  • Data and theories in science are considered in the public domain only after publication in peer-reviewed scientific journals.
  • Empiricism is systematic because it strives for the logic of control and manipulation that characterizes a true experiment.
  • Correlational techniques are helpful when the logic of an experiment cannot be approximated, but because these techniques can only help rule out some hypotheses rather than isolate a causal relationship, they are weaker than true experimental methods.
  • Researchers use many different methods to arrive at their conclusions, and the strengths and weaknesses of these methods vary. Most often, conclusions are drawn only after a slow accumulation of data from many studies.

Scientific thinking in educational practice: Reason-based practice in the absence of direct evidence

Some areas in educational research, to date, lack a research-based consensus, for a number of reasons. Perhaps the problem or issue has not been researched extensively. Perhaps research into the issue is in the early stages of investigation, where descriptive studies are suggesting interesting avenues, but no controlled research justifying a causal inference has been completed. Perhaps many correlational studies and experiments have been conducted on the issue, but the research evidence has not yet converged in a consistent direction.

Even if teachers know the principles of scientific evaluation described earlier, the research literature sometimes fails to give them clear direction. They will have to fall back on their own reasoning processes as informed by their own teaching experiences. In those cases, teachers still have many ways of reasoning scientifically.

Tracing the link from scientific research to scientific thinking in practice

Scientific thinking in practice can be done in several ways. Earlier we discussed different types of professional publications that teachers can read to improve their practice. The most important defining feature of these outlets is whether they are peer reviewed. Another defining feature is whether the publication contains primary research rather than presenting opinion pieces or essays on educational issues. If a journal presents primary research, we can evaluate the research using the formal scientific principles outlined above.

If the journal is presenting opinion pieces about what constitutes best practice, we need to trace the link between those opinions and archival peer-reviewed research. We would look to see whether the authors have based their opinions on peer-reviewed research by reading the reference list. Do the authors provide a significant amount of original research citations (is their opinion based on more than one study)? Do the authors cite work other than their own (have the results been replicated)? Are the cited journals peer-reviewed? For example, in the case of best practice for reading instruction, if we came across an article in an opinion-oriented journal such as Intervention in School and Clinic, we might look to see if the authors have cited work that has appeared in such peer-reviewed journals as Journal of Educational Psychology , Elementary School Journal , Journal of Literacy Research , Scientific Studies of Reading , or the Journal of Learning Disabilities .

These same evaluative criteria can be applied to presenters at professional development workshops or papers given at conferences. Are they conversant with primary research in the area on which they are presenting? Can they provide evidence for their methods and does that evidence represent a scientific consensus? Do they understand what is required to justify causal statements? Are they open to the possibility that their claims could be proven false? What evidence would cause them to shift their thinking?

An important principle of scientific evaluation--the connectivity principle (Stanovich, 2001)--can be generalized to scientific thinking in the classroom. Suppose a teacher comes upon a new teaching method, curriculum component, or process. The method is advertised as totally new, which provides an explanation for the lack of direct empirical evidence for the method. A lack of direct empirical evidence should be grounds for suspicion, but should not immediately rule it out. The principle of connectivity means that the teacher now has another question to ask: "OK, there is no direct evidence for this method, but how is the theory behind it (the causal model of the effects it has) connected to the research consensus in the literature surrounding this curriculum area?" Even in the absence of direct empirical evidence on a particular method or technique, there could be a theoretical link to the consensus in the existing literature that would support the method.

For further tips on translating research into classroom practice, see Warby, Greene, Higgins, & Lovitt (1999). They present a format for selecting, reading, and evaluating research articles, and then importing the knowledge gained into the classroom.

Let's take an imaginary example from the domain of treatments for children with extreme reading difficulties. Imagine two treatments have been introduced to a teacher. No direct empirical tests of efficacy have been carried out using either treatment. The first, Treatment A, is a training program to facilitate the awareness of the segmental nature of language at the phonological level. The second, Treatment B, involves giving children training in vestibular sensitivity by having them walk on balance beams while blindfolded. Treatments A and B are equal in one respect--neither has had a direct empirical test of its efficacy, which reflects badly on both. Nevertheless, one of the treatments has the edge when it comes to the principle of connectivity. Treatment A makes contact with a broad consensus in the research literature that children with extraordinary reading difficulties are hampered because of insufficiently developed awareness of the segmental structure of language. Treatment B is not connected to any corresponding research literature consensus. Reason dictates that Treatment A is a better choice, even though neither has been directly tested.

Direct connections with research-based evidence and use of the connectivity principle when direct empirical evidence is absent give us necessary cross-checks on some of the pitfalls that arise when we rely solely on personal experience. Drawing upon personal experience is necessary and desirable in a veteran teacher, but it is not sufficient for making critical judgments about the effectiveness of an instructional strategy or curriculum. The insufficiency of personal experience becomes clear if we consider that the educational judgments--even of veteran teachers--often are in conflict. That is why we have to adjudicate conflicting knowledge claims using the scientific method.

Let us consider two further examples that demonstrate why we need controlled experimentation to verify even the most seemingly definitive personal observations. In the 1990s, considerable media and professional attention were directed at a method for aiding the communicative capacity of autistic individuals. This method is called facilitated communication. Autistic individuals who had previously been nonverbal were reported to have typed highly literate messages on a keyboard when their hands and arms were supported over the typewriter by a so-called facilitator. These startlingly verbal performances by autistic children who had previously shown very limited linguistic behavior raised incredible hopes among many parents of autistic children.

Unfortunately, claims for the efficacy of facilitated communication were disseminated by many media outlets before any controlled studies had been conducted. Since then, many studies have appeared in journals in speech science, linguistics, and psychology and each study has unequivocally demonstrated the same thing: the autistic child's performance is dependent upon tactile cueing from the facilitator. In the experiments, it was shown that when both child and facilitator were looking at the same drawing, the child typed the correct name of the drawing. When the viewing was occluded so that the child and the facilitator were shown different drawings, the child typed the name of the facilitator's drawing, not the one that the child herself was looking at (Beck & Pirovano, 1996; Burgess, Kirsch, Shane, Niederauer, Graham, & Bacon, 1998; Hudson, Melita, & Arnold, 1993; Jacobson, Mulick, & Schwartz, 1995; Wheeler, Jacobson, Paglieri, & Schwartz, 1993). The experimental studies directly contradicted the extensive case studies of the experiences of the facilitators of the children. These individuals invariably deny that they have inadvertently cued the children. Their personal experience, honest and heartfelt though it is, suggests the wrong model for explaining this outcome. The case study evidence told us something about the social connections between the children and their facilitators. But that is something different than what we got from the controlled experimental studies, which provided direct tests of the claim that the technique unlocks hidden linguistic skills in these children. Even if the claim had turned out to be true, the verification of the proof of its truth would not have come from the case studies or personal experiences, but from the necessary controlled studies.

Another example of the need for controlled experimentation to test the insights gleaned from personal experience is provided by the concept of learning styles--the idea that various modality preferences (or variants of this theme in terms of analytic/holistic processing or "learning styles") will interact with instructional methods, allowing teachers to individualize learning. The idea seems to "feel right" to many of us. It does seem to have some face validity, but it has never been demonstrated to work in practice. Its modern incarnation (see Gersten, 2001; Spear-Swerling & Sternberg, 2001) takes a particularly harmful form, one in which students identified as auditory learners are matched with phonics instruction and visual and/or kinesthetic learners are matched with holistic instruction. The newest form is particularly troublesome because the major syntheses of reading research demonstrate that many children can benefit from phonics-based instruction, not just "auditory" learners (National Reading Panel, 2000; Rayner et al., 2002; Stanovich, 2000). Excluding students identified as "visual/kinesthetic" learners from effective phonics instruction is a bad instructional practice--bad because it is not only not research based, it is actually contradicted by research.

A thorough review of the literature by Arter and Jenkins (1979) found no consistent evidence for the idea that modality strengths and weaknesses could be identified in a reliable and valid way that warranted differential instructional prescriptions. A review of the research evidence by Tarver and Dawson (1978) found likewise that the idea of modality preferences did not hold up to empirical scrutiny. They concluded, "This review found no evidence supporting an interaction between modality preference and method of teaching reading" (p. 17). Kampwirth and Bates (1980) confirmed the conclusions of the earlier reviews, although they stated their conclusions a little more baldly: "Given the rather general acceptance of this idea, and its common-sense appeal, one would presume that there exists a body of evidence to support it. Unfortunately…no such firm evidence exists" (p. 598).

More recently, the idea of modality preferences (also referred to as learning styles, holistic versus analytic processing styles, and right versus left hemispheric processing) has again surfaced in the reading community. The focus of the recent implementations refers more to teaching to strengths, as opposed to remediating weaknesses (the latter being more the focus of the earlier efforts in the learning disabilities field). The research of the 1980s was summarized in an article by Steven Stahl (1988). His conclusions are largely negative because his review of the literature indicates that the methods that have been used in actual implementations of the learning styles idea have not been validated. Stahl concludes: "As intuitively appealing as this notion of matching instruction with learning style may be, past research has turned up little evidence supporting the claim that different teaching methods are more or less effective for children with different reading styles" (p. 317).

Obviously, such research reviews cannot prove that there is no possible implementation of the idea of learning styles that could work. However, the burden of proof in science rests on the investigator who is making a new claim about the nature of the world. It is not incumbent upon critics of a particular claim to show that it "couldn't be true." The question teachers might ask is, "Have the advocates for this new technique provided sufficient proof that it works?" Their burden of responsibility is to provide proof that their favored methods work. Teachers should not allow curricular advocates to avoid this responsibility by introducing confusion about where the burden of proof lies. For example, it is totally inappropriate and illogical to ask "Has anyone proved that it can't work?" One does not "prove a negative" in science. Instead, hypotheses are stated, and then must be tested by those asserting the hypotheses.

Reason-based practice in the classroom

Effective teachers engage in scientific thinking in their classrooms in a variety of ways: when they assess and evaluate student performance, develop Individual Education Plans (IEPs) for their students with disabilities, reflect on their practice, or engage in action research. For example, consider the assessment and evaluation activities in which teachers engage. The scientific mechanisms of systematic empiricism--iterative testing of hypotheses that are revised after the collection of data--can be seen when teachers plan for instruction: they evaluate their students' previous knowledge, develop hypotheses about the best methods for attaining lesson objectives, develop a teaching plan based on those hypotheses, observe the results, and base further instruction on the evidence collected.

This assessment cycle looks even more like the scientific method when teachers (as part of a multidisciplinary team) are developing and implementing an IEP for a student with a disability. The team must assess and evaluate the student's learning strengths and difficulties, develop hypotheses about the learning problems, select curriculum goals and objectives, base instruction on the hypotheses and the goals selected, teach, and evaluate the outcomes of that teaching. If the teaching is successful (goals and objectives are attained), the cycle continues with new goals. If the teaching has been unsuccessful (goals and objectives have not been achieved), the cycle begins again with new hypotheses. We can also see the principle of converging evidence here. No one piece of evidence might be decisive, but collectively the evidence might strongly point in one direction.

Scientific thinking in practice occurs when teachers engage in action research. Action research is research into one's own practice that has, as its main aim, the improvement of that practice. Stokes (1997) discusses how many advances in science came about as a result of "use-inspired research" which draws upon observations in applied settings. According to McNiff, Lomax, and Whitehead (1996), action research shares several characteristics with other types of research: "it leads to knowledge, it provides evidence to support this knowledge, it makes explicit the process of enquiry through which knowledge emerges, and it links new knowledge with existing knowledge" (p. 14). Notice the links to several important concepts: systematic empiricism, publicly verifiable knowledge, converging evidence, and the connectivity principle.

Teachers and researchers: Commonality in a "what works" epistemology

Many educational researchers have drawn attention to the epistemological commonalities between researchers and teachers (Gersten, Vaughn, Deshler, & Schiller, 1997; Stanovich, 1993/1994). A "what works" epistemology is a critical source of underlying unity in the world views of educators and researchers (Gersten & Dimino, 2001; Gersten, Chard, & Baker, 2000). Empiricism, broadly construed (as opposed to the caricature of white coats, numbers, and test tubes that is often used to discredit scientists), is about watching the world, manipulating it when possible, observing outcomes, and trying to associate outcomes with features observed and with manipulations. This is what the best teachers do. And this is true despite the grain of truth in the statement that "teaching is an art." As Berliner (1987) notes: "No one I know denies the artistic component to teaching. I now think, however, that such artistry should be research-based. I view medicine as an art, but I recognize that without its close ties to science it would be without success, status, or power in our society. Teaching, like medicine, is an art that also can be greatly enhanced by developing a close relationship to science" (p. 4).

In his review of the work of the Committee on the Prevention of Reading Difficulties for the National Research Council of the National Academy of Sciences (Snow, Burns, & Griffin, 1998), Pearson (1999) warned educators that resisting evaluation by hiding behind the "art of teaching" defense will eventually threaten teacher autonomy. Teachers need creativity, but they also need to demonstrate that they know what evidence is, and that they recognize that they practice in a profession based in behavioral science. While making it absolutely clear that he opposes legislative mandates, Pearson (1999) cautions:

We have a professional responsibility to forge best practice out of the raw materials provided by our most current and most valid readings of research...If professional groups wish to retain the privileges of teacher prerogative and choice that we value so dearly, then the price we must pay is constant attention to new knowledge as a vehicle for fine-tuning our individual and collective views of best practice. This is the path that other professions, such as medicine, have taken in order to maintain their professional prerogative, and we must take it, too. My fear is that if the professional groups in education fail to assume this responsibility squarely and openly, then we will find ourselves victims of the most onerous of legislative mandates (p. 245).

Those hostile to a research-based approach to educational practice like to imply that the insights of teachers and those of researchers conflict. Nothing could be farther from the truth. Take reading, for example. Teachers often do observe exactly what the research shows--that most of their children who are struggling with reading have trouble decoding words. In an address to the Reading Hall of Fame at the 1996 meeting of the International Reading Association, Isabel Beck (1996) illustrated this point by reviewing her own intellectual history (see Beck, 1998, for an archival version). She relates her surprise upon coming as an experienced teacher to the Learning Research and Development Center at the University of Pittsburgh and finding "that there were some people there (psychologists) who had not taught anyone to read, yet they were able to describe phenomena that I had observed in the course of teaching reading" (Beck, 1996, p. 5). In fact, what Beck was observing was the triangulation of two empirical approaches to the same issue--two perspectives on the same underlying reality. And she also came to appreciate how these two perspectives fit together: "What I knew were a number of whats--what some kids, and indeed adults, do in the early course of learning to read. And what the psychologists knew were some whys--why some novice readers might do what they do" (pp. 5-6).

Beck speculates on why the disputes about early reading instruction have dragged on so long without resolution and posits that it is due to the power of a particular kind of evidence--evidence from personal observation. The determination of whole language advocates is no doubt sustained because "people keep noticing the fact that some children or perhaps many children--in any event a subset of children--especially those who grow up in print-rich environments, don't seem to need much more of a boost in learning to read than to have their questions answered and to point things out to them in the course of dealing with books and various other authentic literacy acts" (Beck, 1996, p. 8). But Beck points out that it is equally true that proponents of the importance of decoding skills are also fueled by personal observation: "People keep noticing the fact that some children or perhaps many children--in any event a subset of children--don't seem to figure out the alphabetic principle, let alone some of the intricacies involved without having the system directly and systematically presented" (p. 8). But clearly we have lost sight of the basic fact that the two observations are not mutually exclusive--one doesn't negate the other. This is just the type of situation for which the scientific method was invented: a situation requiring a consensual view, triangulated across differing observations by different observers.

Teachers, like scientists, are ruthless pragmatists (Gersten & Dimino, 2001; Gersten, Chard, & Baker, 2000). They believe that some explanations and methods are better than others. They think there is a real world out there--a world in flux, obviously--but still one that is trackable by triangulating observations and observers. They believe that there are valid, if fallible, ways of finding out which educational practices are best. Teachers believe in a world that is predictable and controllable by manipulations that they use in their professional practice, just as scientists do. Researchers and educators are kindred spirits in their approach to knowledge, an important fact that can be used to forge a coalition to bring hard-won research knowledge to light in the classroom.

References

  • Adams, M. J. (1990). Beginning to read: Thinking and learning about print . Cambridge, MA: MIT Press.
  • Adler, J. E. (1998, January). Open minds and the argument from ignorance. Skeptical Inquirer , 22 (1), 41-44.
  • Anderson, C. A., & Anderson, K. B. (1996). Violent crime rate studies in philosophical context: A destructive testing approach to heat and Southern culture of violence effects. Journal of Personality and Social Psychology , 70 , 740-756.
  • Anderson, R. C., Hiebert, E. H., Scott, J., & Wilkinson, I. (1985). Becoming a nation of readers . Washington, D. C.: National Institute of Education.
  • Arter, A., & Jenkins, J. (1979). Differential diagnosis-prescriptive teaching: A critical appraisal. Review of Educational Research , 49 , 517-555.
  • Beck, A. R., & Pirovano, C. M. (1996). Facilitated communications' performance on a task of receptive language with children and youth with autism. Journal of Autism and Developmental Disorders , 26 , 497-512.
  • Beck, I. L. (1996, April). Discovering reading research: Why I didn't go to law school . Paper presented at the Reading Hall of Fame, International Reading Association, New Orleans.
  • Beck, I. (1998). Understanding beginning reading: A journey through teaching and research. In J. Osborn & F. Lehr (Eds.), Literacy for all: Issues in teaching and learning (pp. 11-31). New York: Guilford Press.
  • Berliner, D. C. (1987). Knowledge is power: A talk to teachers about a revolution in the teaching profession. In D. C. Berliner & B. V. Rosenshine (Eds.), Talks to teachers (pp. 3-33). New York: Random House.
  • Bjorklund, D. F. (1995). Children's thinking: Developmental function and individual differences (Second Edition) . Pacific Grove, CA: Brooks/Cole.
  • Block, C. C., & Pressley, M. (Eds.). (2002). Comprehension instruction: Research-based best practices . New York: Guilford Press.
  • Bronowski, J. (1956). Science and human values . New York: Harper & Row.
  • Bronowski, J. (1973). The ascent of man . Boston: Little, Brown.
  • Bronowski, J. (1977). A sense of the future . Cambridge: MIT Press.
  • Burgess, C. A., Kirsch, I., Shane, H., Niederauer, K., Graham, S., & Bacon, A. (1998). Facilitated communication as an ideomotor response. Psychological Science , 9 , 71-74.
  • Chard, D. J., & Osborn, J. (1999). Phonics and word recognition in early reading programs: Guidelines for accessibility. Learning Disabilities Research & Practice , 14 , 107-117.
  • Cooper, H. & Hedges, L. V. (Eds.), (1994). The handbook of research synthesis . New York: Russell Sage Foundation.
  • Cunningham, P. M., & Allington, R. L. (1994). Classrooms that work: They all can read and write . New York: HarperCollins.
  • Dawkins, R. (1998). Unweaving the rainbow . Boston: Houghton Mifflin.
  • Dennett, D. C. (1995). Darwin's dangerous idea: Evolution and the meanings of life . New York: Simon & Schuster.
  • Dennett, D. C. (1999/2000, Winter). Why getting it right matters. Free Inquiry , 20 (1), 40-43.
  • Ehri, L. C., Nunes, S., Stahl, S., & Willows, D. (2001). Systematic phonics instruction helps students learn to read: Evidence from the National Reading Panel's Meta-Analysis. Review of Educational Research , 71 , 393-447.
  • Foster, E. A., Jobling, M. A., Taylor, P. G., Donnelly, P., Deknijff, P., Renemieremet, J., Zerjal, T., & Tyler-Smith, C. (1998). Jefferson fathered slave's last child. Nature , 396 , 27-28.
  • Fraenkel, J. R., & Wallen, N. R. (1996). How to design and evaluate research in education (Third Edition). New York: McGraw-Hill.
  • Geertz, C. (1973). The interpretation of cultures . New York: Basic Books.
  • Geertz, C. (1979). From the native's point of view: On the nature of anthropological understanding. In P. Rabinow & W. Sullivan (Eds.), Interpretive social science (pp. 225-242). Berkeley: University of California Press.
  • Gersten, R. (2001). Sorting out the roles of research in the improvement of practice. Learning Disabilities: Research & Practice , 16 (1), 45-50.
  • Gersten, R., Chard, D., & Baker, S. (2000). Factors enhancing sustained use of research-based instructional practices. Journal of Learning Disabilities , 33 (5), 445-457.
  • Gersten, R., & Dimino, J. (2001). The realities of translating research into classroom practice. Learning Disabilities: Research & Practice , 16 (2), 120-130.
  • Gersten, R., Vaughn, S., Deshler, D., & Schiller, E. (1997). What we know about using research findings: Implications for improving special education practice. Journal of Learning Disabilities , 30 (5), 466-476.
  • Goswami, U. (1998). Cognition in children . Hove, England: Psychology Press.
  • Gross, P. R., Levitt, N., & Lewis, M. (1997). The flight from science and reason . New York: New York Academy of Science.
  • Hedges, L. V., & Olkin, I. (1985). Statistical Methods for Meta-Analysis . New York: Academic Press.
  • Holton, G., & Roller, D. (1958). Foundations of modern physical science . Reading, MA: Addison-Wesley.
  • Hudson, A., Melita, B., & Arnold, N. (1993). A case study assessing the validity of facilitated communication. Journal of Autism and Developmental Disorders , 23 , 165-173.
  • Hunter, J. E., & Schmidt, F. L. (1990). Methods of meta-analysis: Correcting error and bias in research findings . Newbury Park, CA: Sage.
  • Jacobson, J. W., Mulick, J. A., & Schwartz, A. A. (1995). A history of facilitated communication: Science, pseudoscience, and antiscience. American Psychologist , 50 , 750-765.
  • Kamil, M. L. (1995). Some alternatives to paradigm wars in literacy research. Journal of Reading Behavior , 27 , 243-261.
  • Kampwirth, R., & Bates, E. (1980). Modality preference and teaching method: A review of the research. Academic Therapy , 15 , 597-605.
  • Kavale, K. A., & Forness, S. R. (1995). The nature of learning disabilities: Critical elements of diagnosis and classification . Mahweh, NJ: Lawrence Erlbaum Associates.
  • Levin, J. R., & O'Donnell, A. M. (2000). What to do about educational research's credibility gaps? Issues in Education: Contributions from Educational Psychology , 5 , 1-87.
  • Liberman, A. M. (1999). The reading researcher and the reading teacher need the right theory of speech. Scientific Studies of Reading , 3 , 95-111.
  • Magee, B. (1985). Philosophy and the real world: An introduction to Karl Popper . LaSalle, IL: Open Court.
  • Mayer, R. E. (2000). What is the place of science in educational research? Educational Researcher , 29 (6), 38-39.
  • McNiff, J., Lomax, P., & Whitehead, J. (1996). You and your action research project . London: Routledge.
  • Medawar, P. B. (1982). Pluto's republic . Oxford: Oxford University Press.
  • Medawar, P. B. (1984). The limits of science . New York: Harper & Row.
  • Medawar, P. B. (1990). The threat and the glory . New York: Harper Collins.
  • Moats, L. (1999). Teaching reading is rocket science . Washington, DC: American Federation of Teachers.
  • National Reading Panel: Reports of the Subgroups. (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction . Washington, DC.
  • Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology , 2 , 175-220.
  • Pearson, P. D. (1993). Teaching and learning to read: A research perspective. Language Arts , 70 , 502-511.
  • Pearson, P. D. (1999). A historically based review of preventing reading difficulties in young children. Reading Research Quarterly , 34 , 231-246.
  • Plotkin, D. (1996, June). Good news and bad news about breast cancer. Atlantic Monthly , 53-82.
  • Popper, K. R. (1972). Objective knowledge . Oxford: Oxford University Press.
  • Pressley, M. (1998). Reading instruction that works: The case for balanced teaching . New York: Guilford Press.
  • Pressley, M., Rankin, J., & Yokol, L. (1996). A survey of the instructional practices of outstanding primary-level literacy teachers. Elementary School Journal , 96 , 363-384.
  • Rayner, K. (1998). Eye movements in reading and information processing: 20 Years of research. Psychological Bulletin , 124 , 372-422.
  • Rayner, K., Foorman, B. R., Perfetti, C. A., Pesetsky, D., & Seidenberg, M. S. (2002, March). How should reading be taught? Scientific American , 286 (3), 84-91.
  • Reading Coherence Initiative. (1999). Understanding reading: What research says about how children learn to read . Austin, TX: Southwest Educational Development Laboratory.
  • Rosenthal, R. (1995). Writing meta-analytic reviews. Psychological Bulletin , 118 , 183-192.
  • Rosnow, R. L., & Rosenthal, R. (1989). Statistical procedures and the justification of knowledge in psychological science. American Psychologist , 44 , 1276-1284.
  • Shankweiler, D. (1999). Words to meaning. Scientific Studies of Reading , 3 , 113-127.
  • Share, D. L., & Stanovich, K. E. (1995). Cognitive processes in early reading development: Accommodating individual differences into a model of acquisition. Issues in Education: Contributions from Educational Psychology , 1 , 1-57.
  • Shavelson, R. J., & Towne, L. (Eds.) (2002). Scientific research in education . Washington, DC: National Academy Press.
  • Siegler, R. S. (1991). Children's thinking (Second Edition) . Englewood Cliffs, NJ: Prentice Hall.
  • Snow, C. E., Burns, M. S., & Griffin, P. (Eds.). (1998). Preventing reading difficulties in young children . Washington, DC: National Academy Press.
  • Snowling, M. (2000). Dyslexia (Second Edition) . Oxford: Blackwell.
  • Spear-Swerling, L., & Sternberg, R. J. (2001). What science offers teachers of reading. Learning Disabilities: Research & Practice , 16 (1), 51-57.
  • Stahl, S. (December, 1988). Is there evidence to support matching reading styles and initial reading methods? Phi Delta Kappan , 317-327.
  • Stanovich, K. E. (1993/1994). Romance and reality. The Reading Teacher , 47 (4), 280-291.
  • Stanovich, K. E. (2000). Progress in understanding reading: Scientific foundations and new frontiers . New York: Guilford Press.
  • Stanovich, K. E. (2001). How to think straight about psychology (Sixth Edition). Boston: Allyn & Bacon.
  • Stokes, D. E. (1997). Pasteur's quadrant: Basic science and technological innovation . Washington, DC: Brookings Institution Press.
  • Swanson, H. L. (1999). Interventions for students with learning disabilities: A meta-analysis of treatment outcomes . New York: Guilford Press.
  • Tarver, S. G., & Dawson, E. (1978). Modality preference and the teaching of reading: A review. Journal of Learning Disabilities , 11 , 17-29.
  • Vaughn, S., & Dammann, J. E. (2001). Science and sanity in special education. Behavioral Disorders , 27, 21-29.
  • Warby, D. B., Greene, M. T., Higgins, K., & Lovitt, T. C. (1999). Suggestions for translating research into classroom practices. Intervention in School and Clinic , 34 (4), 205-211.
  • Wheeler, D. L., Jacobson, J. W., Paglieri, R. A., & Schwartz, A. A. (1993). An experimental assessment of facilitated communication. Mental Retardation , 31 , 49-60.
  • Wilkinson, L. (1999). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist , 54 , 595-604.
  • Wilson, E. O. (1998). Consilience: The unity of knowledge . New York: Knopf.

For additional copies of this document:

Contact the National Institute for Literacy at ED Pubs PO Box 1398, Jessup, Maryland 20794-1398

Phone 1-800-228-8813 Fax 301-430-1244 [email protected]


Date Published: 2003 Date Posted: March 2010



How to Read and Interpret Research to Benefit Your Teaching Practice

Teachers can find helpful ideas in research articles and take a strategic approach to get the most out of what they’re reading.


Have you read any education blogs, attended a conference session this summer, or gone to a back-to-school meeting so far where information on PowerPoint slides was supported with research like this: “Holland et al., 2023”? Perhaps, like me, you’ve wondered what to do with these citations or how to find and read the work cited. We want to improve our teaching practice and keep learning amid our busy schedules and responsibilities. When we find a sliver of time to look for the research article(s) being cited, how are we supposed to read, interpret, implement, and reflect on it in our practice? 

Much research over the past decade has built on research-practice partnerships, in which teachers and researchers work collaboratively to improve student learning. Though researchers in higher education typically conduct formal research and publish their work in journal articles, it's important for teachers to also see themselves as researchers. They engage in qualitative analysis when they circulate the room to examine and interpret student work, and in quantitative analysis when they make predictions from student achievement data.

There are different sources of knowledge and timely questions to consider that education researchers can learn and take from teachers. So, what if teachers were better equipped to translate research findings from a journal article into improved practice relevant to their classroom’s immediate needs? I’ll offer some suggestions on how to answer this question.

Removing Barriers to New Information

For starters, research is crucial for education. It helps us learn and create new knowledge. Teachers learning how to translate research into practice can help contribute toward continuous improvement in schools. However, not all research is beneficial or easily applicable. While personal interests may lead researchers in a different direction, your classroom experience holds valuable expertise. Researchers should be viewed as allies, not sole authorities.

Additionally, paywalls prevent teachers from accessing valuable research articles that are often referenced in professional development. However, some sites, like Sage and JSTOR, offer open access journals where you can find research relevant to your classroom needs. Google Scholar is another helpful resource where you can plug in keywords like elementary math , achievement , small-group instruction , or diverse learners to find articles freely available as PDFs. Alternatively, you can use Elicit and get answers to specific questions. It can provide a list of relevant articles and summaries of their findings.

Approach research articles differently from other types of writing: they aren’t written with teachers in mind but rather for academic researchers. Keep this in mind when selecting articles that align with your teaching vision, student demographics, and school environment.

Using behavioral and brain science research, I implemented the spacing effect. I used this strategy to build in spaced fluency, partner practices, and spiral reviews (e.g., “do nows”) with an intentional selection of questions and tasks based on student work samples and formative/summative assessment data. It improved my students’ memory, long-term retention, and proficiency, so I didn’t take it too personally when some of them still forgot procedures or symbols.

What You’ll Find in a Research Article

Certain elements are always included in a research article. The abstract gives a brief overview. Following that, the introduction typically explains the purpose and significance of the research—often through a theoretical framework and literature review. Other common sections of a research article may include methodology, results or findings, and discussion or conclusion.

The methodology section explains how the researchers answered their research question(s) to understand the topic. The results/findings section provides the answer(s) to the research question(s), while the discussion/conclusion section explains the importance and meaning of the results/findings and why it matters to readers and the field of education at large.

How to Process Information to Find What You’re Looking For

To avoid getting overwhelmed while reading research, take notes. Many articles are lengthy and filled with complex terminology and citations. Choose one relevant article at a time, and jot down important points or questions.

You could apply many strategies to read research, but here’s an idea that takes our time constraints and bandwidth as teachers into account:

  • First, read the title and full abstract, then scan and skim the introduction. You’ll be able to see whether it’s relevant to your interests and needs, and whether you should continue reading.
  • After you’ve decided that the research is relevant to your classroom and professional development, jump straight to the discussion/conclusion section to see the “so what” of the research findings and how they could apply to your classroom. Review the findings/results section afterward for more details if needed.

Decipher the Details in the Data 

As a math, science, or English language arts teacher, you might come across figures, tables, or graphs that could spark ideas for your lessons. Some of these visuals and data may seem complex and difficult to understand. To make sense of them, take it slow and read through the notes and descriptions carefully.             

For example, researchers C. Kirabo Jackson and Alexey Makarin created a graph to show that middle school math teachers who had online access and support to use high-quality materials saw a positive impact on math test scores, especially when they used the materials for multiple lessons. The notes below the graph explain how the data was collected and which school districts were involved in the study.

Lastly, after reading the findings/results section, you’ll understand the gist of the research and if it’s applicable to your needs. Reading beyond these sections depends on your schedule and interests. It’s perfectly normal if it takes additional time to digest these sections.

When it comes to reading research, teachers don’t have to go it alone. School and district leaders can involve us in discussions about research findings and their practical implications for our school during professional learning community meetings or professional development sessions before the start of the school year. Even if only a few teachers participate in this process, sharing the main points with peers and the principal can have a significant, positive impact on instruction for students.

Introduction to Education Research

  • Sharon K. Park
  • Khanh-Van Le-Bucklin
  • Julie Youm

Educators rely on the discovery of new knowledge of teaching practices and frameworks to improve and evolve education for trainees. An important consideration when embarking on a career conducting education research is finding a scholarship niche. An education researcher can then develop the conceptual framework that describes the state of knowledge, identify gaps in understanding of the phenomenon or problem, and develop an outline for the methodological underpinnings of the research project. In response to Ernest Boyer’s seminal report, Scholarship Reconsidered: Priorities of the Professoriate, research was conducted on the criteria and decision processes used for grants and publications. Six standards, known as Glassick’s criteria, provide a tangible measure by which educators can assess the quality and structure of their education research: clear goals, adequate preparation, appropriate methods, significant results, effective presentation, and reflective critique. Ultimately, the promise of education research is to realize advances and innovation for learners that are informed by evidence-based knowledge and practices.

Boyer EL. Scholarship reconsidered: priorities of the professoriate. Princeton: Carnegie Foundation for the Advancement of Teaching; 1990.

Munoz-Najar Galvez S, Heiberger R, McFarland D. Paradigm wars revisited: a cartography of graduate research in the field of education (1980–2010). Am Educ Res J. 2020;57(2):612–52.

Ringsted C, Hodges B, Scherpbier A. ‘The research compass’: an introduction to research in medical education: AMEE Guide no. 56. Med Teach. 2011;33(9):695–709.

Bordage G. Conceptual frameworks to illuminate and magnify. Med Educ. 2009;43(4):312–9.

Varpio L, Paradis E, Uijtdehaage S, Young M. The distinctions between theory, theoretical framework, and conceptual framework. Acad Med. 2020;95(7):989–94.

Ravitch SM, Riggins M. Reason & Rigor: how conceptual frameworks guide research. Thousand Oaks: Sage Publications; 2017.

Park YS, Zaidi Z, O'Brien BC. RIME foreword: what constitutes science in educational research? Applying rigor in our research approaches. Acad Med. 2020;95(11S):S1–5.

National Institute of Allergy and Infectious Diseases. Writing a winning application—find your niche. 2020a. https://www.niaid.nih.gov/grants-contracts/find-your-niche. Accessed 23 Jan 2022.

National Institute of Allergy and Infectious Diseases. Writing a winning application—conduct a self-assessment. 2020b. https://www.niaid.nih.gov/grants-contracts/winning-app-self-assessment. Accessed 23 Jan 2022.

Glassick CE, Huber MT, Maeroff GI. Scholarship assessed: evaluation of the professoriate. San Francisco: Jossey Bass; 1997.

Simpson D, Meurer L, Braza D. Meeting the scholarly project requirement-application of scholarship criteria beyond research. J Grad Med Educ. 2012;4(1):111–2. https://doi.org/10.4300/JGME-D-11-00310.1 .

Fincher RME, Simpson DE, Mennin SP, Rosenfeld GC, Rothman A, McGrew MC et al. The council of academic societies task force on scholarship. Scholarship in teaching: an imperative for the 21st century. Academic Medicine. 2000;75(9):887–94.

Hutchings P, Shulman LS. The scholarship of teaching: new elaborations, new developments. Change. 1999;11–5.

Authors and Affiliations

School of Pharmacy, Notre Dame of Maryland University, Baltimore, MD, USA

Sharon K. Park

University of California, Irvine School of Medicine, Irvine, CA, USA

Khanh-Van Le-Bucklin & Julie Youm

Corresponding author

Correspondence to Sharon K. Park.

Editors and Affiliations

Johns Hopkins University School of Medicine, Baltimore, MD, USA

April S. Fitzgerald

Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA

Gundula Bosch

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Park, S.K., Le-Bucklin, KV., Youm, J. (2023). Introduction to Education Research. In: Fitzgerald, A.S., Bosch, G. (eds) Education Scholarship in Healthcare. Springer, Cham. https://doi.org/10.1007/978-3-031-38534-6_2

DOI: https://doi.org/10.1007/978-3-031-38534-6_2

Published: 29 November 2023

Publisher Name: Springer, Cham

Print ISBN: 978-3-031-38533-9

Online ISBN: 978-3-031-38534-6


National Academies Press: OpenBook

Scientific Research in Education (2002)

Chapter 1: Introduction

Born of egalitarian instincts, the grand experiment of U.S. public education began over 200 years ago. The scope and complexity of its agenda is apparent:

to teach the fundamental skills of reading, writing, and arithmetic; to nurture critical thinking; to convey a general fund of knowledge; to develop creativity and aesthetic perception; to assist students in choosing and preparing for vocations in a highly complex economy; to inculcate ethical character and good citizenship; to develop physical and emotional well-being; and to nurture the ability, the intelligence, and the will to continue on with education as far as any particular individual wants to go (Cremin, 1990, p. 42).

The educational system is no less complex. Today the United States sends more than 45 million children to schools that are governed by 15,000 independent school districts in the 50 states (and territories); it boasts thousands of colleges and universities and myriad adult and informal learning centers. The nation takes pride in reaffirming the constitutional limitations on the federal role in education, yet recently has tentatively embraced the idea of national standards. The system is one of dualities: a national ethos with local control; commitment to excellence and aspiration to equality; and faith in tradition and appetite for innovation.

The context in which this system operates is also changing. The United States is no longer a manufacturing society in which people with little formal education can find moderate- to high-paying jobs. It is now a service- and knowledge-driven economy in which high levels of literacy and numeracy are required of almost everyone to achieve a good standard of living (National Research Council, 1999a; Secretary’s Commission on Achieving Necessary Skills, 1991; Murnane and Levy, 1996; Judy and D’Amico, 1997; Packer, 1997). Moreover, to address the challenges of, for example, low-performing schools, the “achievement gap,” and language diversity, educators today require new knowledge to reengineer schools in effective ways.

To meet these new demands, rigorous, sustained, scientific research in education is needed. In today’s rapidly changing economic and technological environment, schooling cannot be improved by relying on folk wisdom about how students learn and how schools should be organized. No one would think of designing a rocket to the moon or wiping out a widespread disease by relying on untested hunches; likewise, one cannot expect to improve education without research.

Knowledge is needed on many topics, including: how to motivate children to succeed; how effective schools and classrooms are organized to foster learning; the roots of teenage alienation and violence; how human and economic resources can be used to support effective instruction; effective strategies for preparing teachers and school administrators; the interaction among what children learn in the context of their families, schools, colleges, and the media; the relationship between educational policy and the economic development of society; and the ways that the effects of schooling are moderated by culture and language. In order that society can learn how to improve its efforts to mount effective programs, rigorous evaluations of innovations must also be conducted. The education research community has produced important insights on many of these topics (we trace some of them in Chapter 2 ). However, in contrast to physics and other older sciences, many areas of education are relatively new domains for scientific study, and there is much work yet to do.

Everyone has opinions about schooling, because they were all once in school. But in this ever more complex world, in which educational problems tend to be portrayed with the urgency of national survival, there is (again) an understandable attraction to the rationality and disciplined style of science. Simply put, for some problems citizens, educators, administrators, policy makers, and other concerned individuals want to hear about hard evidence, they want impartiality, and they want decisions to rest on reasonable, rigorous, and scientific deliberation. And how can the quality of science be judged? This is our topic.

To set the stage for this discussion, this chapter provides historical and philosophical background and describes how the current undertaking fits into that broader context.

HISTORICAL AND PHILOSOPHICAL CONTEXT

Education research in the United States is barely 100 years old, and its history is not a simple tale of progress. The study of education drew heavily on the emerging social sciences, which had found a place in research universities at the beginning of the twentieth century. That foothold was often tenuous, however, with intense debates about the essential character of these “sciences.” Many in academic circles sought to model the social sciences on the physical sciences, while others—regarding this as “physics envy”—insisted that broader accounts of the nature of science had to be adopted in order to encompass adequately the range of phenomena in these newer domains (Lagemann, 2000).

Education research began as a branch of psychology at a time when psychology was still a part of philosophy. In the first decade of the twentieth century, psychology was emerging as a distinct field, as were the budding fields of educational psychology, history of education, and educational administration. By the 1930s, subfields of work that centered on different subjects of the school curriculum—notably reading, mathematics, and social studies—had also emerged. As education research continued to develop new methods and questions and in response to developments in the social and behavioral sciences, research fields proliferated (Lagemann, 2000; Cronbach and Suppes, 1969).

From the beginning, the field has been plagued by skepticism concerning the value and validity of developing a “science of education.” This attitude was evident as long ago as the late nineteenth century, when universities began to establish departments and schools of education. A chorus of complaints arose from faculty in the arts and sciences concerning the inclusion of scholars intending to systematically study the organizational and pedagogical aspects of schooling. Ellwood Patterson Cubberley, a school superintendent in San Diego who just before the end of the nineteenth century was appointed chair of the department of education (later the School of Education) at Stanford University, arrived on campus ready and eager to help improve education by generating studies of the history and current administration of the nation’s public schools. Despite his enthusiasm and extraordinary productivity, his colleagues refused to acknowledge that “the study of education could be validly considered either an art or a science.” On the opposite side of the country Paul Hanus, Harvard’s first scholar of education, faced similar skepticism. George Herbert Palmer liked to quip that when “Professor Hanus came to Cambridge, he bore the onus of his subject” (quoted in Lagemann, 2000, p. 72). Indeed, a set of attitudes toward education research that one might call “anti-educationism” has been a constant to the present day.

Despite this skepticism, the enterprise grew apace. For example, by the end of the twentieth century, the American Educational Research Association (AERA) had well over 20,000 members (roughly 5,500 of whom report research as their primary professional responsibility), organized into 12 divisions (e.g., administration, curriculum, learning and instruction, teacher education), some with a number of subsections, and about 140 special interest groups (American Educational Research Association, 2000). This growth in the number of scholars has been notable because it occurred in the absence of a proportional increase in federal funding. And as a percentage of the total amount spent on public elementary and secondary education, the nation as a whole invested less than 0.1 percent in research (President’s Committee of Advisors on Science and Technology, 1997).

There are several reasons for the lack of public support for education research. Problems include research quality (Lagemann, 2000; Kaestle, 1993; Sroufe, 1997; Levin and O’Donnell, 1999), fragmentation of the effort (National Research Council, 1992), and oversimplified expectations about the role of research in education reform (National Research Council, 2001d). Another key problem has been the sharp divide between education research and scholarship and the practice of education in schools and other settings. This disconnect has several historic roots: researchers and practitioners have typically worked in different settings; most researchers have been men, while most teachers have been women; and teacher education has typically relied on practical experience rather than research. Operating in different worlds, researchers and practitioners did not develop the kinds of cross fertilization that are necessary in fields where research and practice should develop reciprocally—medicine and agriculture faced similar problems in their early development (Lagemann, 2000; Mitchell and Haro, 1999).

The epistemology of education research—that is, understanding about its core nature as a scientific endeavor—has also evolved significantly since its early days (see Dewey [1929] for an insightful early treatment). Five dimensions are particularly relevant to this report: the emergence of refined models of human nature; progress in understanding how scientific knowledge accumulates; recognition that education is a contested field of study; new developments in research designs and methods; and increased understanding of the nature of scientific rigor or quality. We comment briefly on each below and expand on several of them in the remaining chapters.

Models of Human Nature

In the decades when scientific research in education was gathering momentum, the most prevalent “models of man” and of human social life were derived from the mechanistic, positivistic sciences and philosophy of the nineteenth and twentieth centuries. The most famous example—the focus of numerous theoretical and methodological battles—was B.F. Skinner’s behaviorism (Skinner, 1953/1965, 1972). Following the work of the logical positivist philosophers, who believed that talking about entities that were not available for direct inspection (such as thoughts, values, ideals, and beliefs) was literally meaningless, Skinner’s research assumed that human behavior could be explained completely in terms of observable causes—for example, through schedules of reinforcement and punishment. Although Skinner’s work laid the foundation for modern theories of behavior (see National Research Council, 2001b), the behaviorist paradigm excluded important phenomena from inquiry at the outset of the study. Today, it is recognized that many phenomena of interest across the domains of the social sciences and education research result from voluntary human actions (or from the unintended or aggregate consequences of such actions) even though direct measurement of such phenomena is typically not possible. 1 Thus, research on human action must take into account individuals’ understandings, intentions, and values as well as their observable behavior (Phillips and Burbules, 2000; Phillips, 2000).

The development of alternative perspectives on the nature of humans that are more inclusive than the once-dominant behaviorist perspective should be regarded as both highly promising and something of a cautionary tale for education research. The moral of the rise and at least partial fall of behaviorism warns the scientific community to resist the tendency to take a single model (whether behavioral, cognitive, or interpretive), derived in relation to a limited range of phenomena, and extrapolate it as appropriate across all the social and behavioral sciences. There is room in the mansion of science for more than one model, and also for the creative tension produced when rival models are deployed (see, for an example, Greeno et al., 1996).

Progress in Science

If appreciation for multiple perspectives on the nature of humans has enhanced efforts to develop scientific research, so has a better, more sophisticated awareness of what “progress” in science means and how it is achieved. Linear models of progress have been put aside in favor of more jagged ones. Mistakes are made as science moves forward. The process is not infallible (see Lakatos and Musgrave, 1970); science advances through professional criticism and self-correction. Indeed, we show in Chapter 2 that this jagged progression of scientific progress is typical across the range of physical and social sciences as well as education research.

A long history of the philosophy of science also teaches that there is no algorithm for scientific progress (and, consequently, we certainly do not attempt to offer one in this report). Despite its optimistic-sounding title, even Sir Karl Popper’s (1959) classic work, The Logic of Scientific Discovery, makes the point strongly that there is no logical process by which researchers can make discoveries in the first place. Popper also argues that knowledge always remains conjectural and potentially revisable. Over time, erroneous theories and inaccurate findings are detected and eliminated, largely by the process of testing (seeking refutations) that Popper himself described (Popper, 1965; Newton-Smith, 1981).

1. For example, car purchases—a result of human actions—are easily observable and trackable; however, the reasons that people purchase a particular brand at a particular time and in a particular place are not.

Education—A Highly Contested Field

While knowledge in the physical and social sciences and education has accumulated over time, the highly contested nature of education has had an effect on the progress of scientific research (Lagemann, 1996). One reason education is highly contested is because values play a central role: people’s hopes and expectations for educating the nation’s young are integrally tied to their hopes and expectations about the direction of society and its development (Hirst and Peters, 1970; Dewey, 1916). Obviously, different people see these matters differently. As in other fields that have such a public character, social ideals inevitably influence the research that is done, the way it is framed and conducted, and the policies and practices that are based on research findings. And decisions about education are sometimes instituted with no scientific basis at all, but rather are derived directly from ideology or deeply held beliefs about social justice or the good of society in general.

A second reason that education is contested is that rarely, if ever, does an education intervention—one important focus of study in the broader domain of education research—have only one main effect. Both positive and negative unintended consequences are often important (Cronbach et al., 1980). Education interventions have costs—in money, time, and effort: making a judgment on the effectiveness of a treatment is complex and requires taking account of myriad factors.

In short, education research will inevitably reflect and have to face many different values, and it will as a consequence produce complex findings. Ultimately, policy makers and practicing educators will have to formulate specific policies and practices on the basis of values and practical wisdom as well as education research. Science-based education research will affect, but typically not solely determine, these policies and practices.

Research Design and Method

Research in education has been enhanced by the recent invention of methods: new observational techniques, new experimental designs, new methods of data gathering and analysis, and new software packages for managing and analyzing both quantitative and qualitative data. Rapid advances in computer technologies have also dramatically increased the capacity to store and analyze large data sets. As new methods are developed, they lead to the identification of new questions, and the investigation of these, in turn, can demand that new methods be devised. We illustrate this dynamic relationship between methods, theories, empirical findings, and problems in Chapter 2 and describe common designs and methods employed to address classes of research questions in Chapter 5 .

Scientific Evidence and Rigor

In thinking about the ways that a research conjecture or hypothesis may be supported by evidence, many philosophers of science have found it fruitful to adopt a term that was featured in John Dewey’s (1938) treatise, Logic: The Theory of Inquiry (see, e.g., Phillips and Burbules, 2000). Dewey wrote of warrants for making assertions or knowledge claims. In science, measurements and experimental results, observational or interview data, and mathematical and logical analysis all can be part of the warrant—or case—that supports a theory, hypothesis, or judgment. However, warrants are always revocable depending on the findings of subsequent inquiry. Beliefs that are strongly warranted or supported at one time (e.g., the geocentric model of the solar system) may later need to be abandoned (for a heliocentric model). Evidence that is regarded as authoritative at one time (e.g., ice ages are caused by the eccentricity of the Earth’s orbit) can be shown later to be faulty (see Chapter 3 ). Science progresses both by advancing new theories or hypotheses and by eliminating theories, hypotheses, or previously accepted facts that have been refuted by newly acquired evidence judged to be definitive.

To make progress possible, then, theories, hypotheses, or conjectures must be stated in clear, unambiguous, and empirically testable terms. Evidence must be linked to them through a clear chain of reasoning. Moreover, the community of inquirers must be, in Karl Popper’s expression, “open societies” that encourage the free flow of critical comment. Researchers have an obligation to avoid seeking only such evidence that apparently supports their favored hypotheses; they also must seek evidence that is incompatible with these hypotheses even if such evidence, when found, would refute their ideas. Thus, it is the scientific community that enables scientific progress, not, as Nobel Prize-winning physicist Polykarp Kusch once declared, adherence to any one scientific method (Mills, 2000 [emphasis added]). We emphasize this notion of community in the scientific enterprise throughout this report.

These points about the nature of evidence constitute the essence of our account of rigor in inquiry; these ideas are fleshed out in the rest of this report. Importantly, our vision of scientific quality and rigor applies to the two forms of education research that have traditionally been labeled “quantitative” and “qualitative,” as well as to two forms of research that have been labeled “basic” and “applied.” These dichotomies have historically formed fault lines within and outside academia. As our brief discussion of the emergence of schools of education suggests, the perceived hierarchy of basic or “pure” science versus its messier cousin—applied research—has isolated the field of education research from other sciences. Similarly, sharp distinctions between quantitative and qualitative inquiry have divided the field. In particular, the current trend of schools of education to favor qualitative methods, often at the expense of quantitative methods, has invited criticism. Real problems stem from these “either/or” kinds of preferences, and we believe that both categorizations are neither well defined nor constructive. Thus, beyond a brief discussion that follows, we do not dwell on them in the report.

It is common to see quantitative and qualitative methods described as being fundamentally different modes of inquiry—even as being different paradigms embodying quite different epistemologies (Howe, 1988; Phillips, 1987). We regard this view as mistaken. Because we see quantitative and qualitative scientific inquiry as being epistemologically quite similar (King, Keohane, and Verba, 1994; Howe and Eisenhart, 1990), and as we recognize that both can be pursued rigorously, we do not distinguish between them as being different forms of inquiry. We believe the distinction is outmoded, and it does not map neatly in a one-to-one fashion onto any group or groupings of disciplines.

We also believe the distinction between basic and applied science has outlived its usefulness. This distinction often served to denigrate applied work (into which category education research was usually placed). But as Stokes (1997) in Pasteur’s Quadrant made clear, great scientific work has often been inspired by the desire to solve a pressing practical problem—much of the cutting-edge work of the scientist who inspired the book’s title had this origin. What makes research scientific is not the motive for carrying it out, but the manner in which it is carried out.

Finally, it is important to note that the question of what constitutes scientific rigor and quality has been the topic of much debate within the education research community itself since the nineteenth century. Two extreme views in the field’s complex history are worthy of brief elaboration. First, some extreme “postmodernists” have questioned whether there is any value in scientific evidence in education whatsoever (see the discussion of these issues in Gross, Levitt, and Lewis, 1997). At the other end of the spectrum, there are those who would define scientific research in education quite narrowly, suggesting that it is only quantitative measures and tight controls that unambiguously define science (see, e.g., Finn, 2001). We do not believe that either view is constructive, and in our estimation they have both compounded the “awful reputation” (Kaestle, 1993) of education research and diminished its promise.

PUBLIC AND PROFESSIONAL INTEREST IN EDUCATION RESEARCH

While federal funding for education research has waxed and (mostly) waned, the federal government has been clear and consistent in its call for scientific research into education. The Cooperative Research Act of 1954 first authorized the then Office of Education to fund education research (National Research Council, 1992). The National Institute of Education (NIE) was created in 1971 to provide “leadership in the conduct and support of scientific inquiry into education” (General Education Provisions Act, Sec. 405; cited in National Research Council, 1992). Likewise, as NIE was incorporated into the U.S. Office of Educational Research and Improvement (OERI), the quest for the scientific conduct of education research was front and center (Department of Education Organization Act, 1979; see National Research Council, 1992).

The federal government has not been alone in calling for scientific research into education. This call has been echoed in a series of reports and recommendations from the National Academies’ research arm, the National Research Council (NRC). In 1958, the NRC’s report, A Proposed Organization for Research in Education, recommended establishing a research organization for the advancement and improvement of education. A 1977 report, Fundamental Research and the Process of Education, called for fundamental research about educational processes. A 1986 report, Creating a Center for Education Statistics: A Time for Action, led to what many regard as the successful overhaul of the federal education statistical agency. And in the 1992 report, Research and Education Reform: Roles for the Office of Educational Research and Improvement, the NRC called for a complete overhaul of the federal research agency, criticizing its focus on “quick solutions to poorly understood problems” (National Research Council, 1992, p. viii). The report recommended creating an infrastructure that would support and foster scientific research into learning and cognitive processes underlying education, curriculum, teaching, and education reform.

What, then, warrants another NRC report on scientific research in education? First, as we argue above, the nation’s commitment to improve the education of all children requires continuing efforts to improve its research capacity. Questions concerning how to do this are currently being debated as Congress considers ways to organize a federal education research agency. Indeed, H.R. 4875—the so-called “Castle bill” to reauthorize OERI—has provided us with an opportunity to revisit historic questions about the “science of education” in a modern policy context. This bill includes definitions—crafted in the political milieu—of scientific concepts to be applied to education research, reflecting yet again a skepticism about the quality of current scholarship. (We discuss these definitions briefly in Chapter 6 .) Our report is specifically intended to provide an articulation of the core nature of scientific inquiry in education from the research community.

The rapid growth of the education research community in recent years has resulted in the production of many studies, articles, journal publications, books, and opinion pieces that are associated with academics but are not necessarily scientific in character. Moreover, the field of education researchers is itself a diverse mix of professionals with varying levels and types of research training, and they often bring quite different orientations to their work. These multiple perspectives are in many ways indicative of the health of the enterprise, but they also render the development of a cohesive community with self-regulating norms difficult (Lagemann, 2000). In this spirit, we intend this report to provide a balanced account of scientific quality and rigor that sparks self-reflection within the research community about its roles and responsibilities for promoting scientific quality and advancing scientific understanding.

Finally, perhaps more than ever before, citizens, business leaders, politicians, and educators want credible information on which to evaluate and guide today’s reform and tomorrow’s education for all students. Driven by the performance goals inherent in standards-based reforms, they seek a working consensus on the challenges confronting education, on what works in what contexts and what doesn’t, and on why what works does work. Simply put, they seek trustworthy, scientific evidence on which to base decisions about education.

COMMITTEE CHARGE AND APPROACH

The committee was assembled in the fall of 2000 and was asked to complete its report by the fall of 2001. The charge from the committee’s sponsor, the National Educational Policy and Priorities Board of the U.S. Department of Education, was as follows:

This study will review and synthesize recent literature on the science and practice of scientific education research and consider how to support high quality science in a federal education research agency.

To organize its deliberations, the committee translated this mandate into three framing questions:

What are the principles of scientific quality in education research?

To address this question, the committee considered how the purposes, norms, methods, and traditions of scientific inquiry translated in the study of education. The committee also considered what scientific quality meant, both in individual research projects and in programs of research, to better understand how knowledge could be organized, synthesized, and generalized. Furthermore, we sought to understand how scientific education research is similar to, and different from, other scientific endeavors.

In approaching this question, we recognize that existing education research has suffered from uneven quality. This statement is not very startling, because the same could be said about virtually every area of scientific research. Although it is clear that the reputation of education research is quite poor (Kaestle, 1993; Sroufe, 1997; H.R. 4875), we do not believe it is productive to attempt to catalogue “bad research.” Instead, we have found it useful to focus on constructive questions: How much good research has been produced? Why isn’t there more good research? How could more good research be generated? We address these kinds of questions in the report.

How can a federal research agency promote and protect scientific quality in the education research it supports?

The committee did not conduct an evaluation of OERI. Rather, the committee approached the general question of the federal role from the perspective of scientific quality and rigor. We sought to identify the key design principles for a federal agency charged with fostering the scientific integrity of the research it funds and with promoting the accumulation of science-based knowledge over time. Among the issues the committee explored were how research quality is affected by internal infrastructure mechanisms, such as peer review, as well as external forces, such as political influence and fiscal support, and how the federal role can build the capacity of the field to do high-quality scientific work.

Here again, our approach is constructive and forward looking. We attempt to strike a balance between understanding the realities of the federal bureaucracy and the history of an education research agency within it while avoiding the detailed prescriptions of previous and current proposals to reform the existing federal role. We hope to make a unique contribution by focusing on “first principles” that form the core of scientific education research at the federal level and providing guidance about how these principles might be implemented in practice. Some of our suggestions are already in place; some are not. Some will be easy to implement; others will be more difficult. Our intent is to provide a set of principles that can serve as a guidepost for improvement over time.

How can research-based knowledge in education accumulate?

The committee believes that rigor in individual scientific investigations and a strong federal infrastructure for supporting such work are required for research in education to generate and nurture a robust knowledge base. Thus, in addressing this question, we focused on mechanisms that support the accumulation of knowledge from science-based education research—the organization and synthesis of knowledge generated from multiple investigations. The committee considered the roles of the professional research community, the practitioner communities, and the federal government. Since we view the accumulation of scientific knowledge as the ultimate goal of research, this issue weaves throughout the report.

Assumptions

Taking our cue from much of the historical and philosophical context we describe in this chapter, we make five core assumptions in approaching our work.

First, although science is often perceived as embodying a concise, unified view of research, the history of scientific inquiry attests to the fact that there is no one method or process that unambiguously defines science. The committee has therefore taken an inclusive view of “the science of education” or “the educational sciences” in its work. This broad view, however, should not be misinterpreted to suggest “anything goes.” Indeed, the primary purpose of this report is to provide guidance for what constitutes rigorous scientific research in education. Thus, we identify a set of principles that apply to physical and social science research and to science-based education research (Chapter 3). In conjunction with a set of features that characterize education (Chapter 4), these principles help define the domain of scientific research in education, roughly delineating what is in the domain and what is not. We argue that education research, like research in the social, biological, and physical realms, faces—as a final “court of appeal”—the test of conceptual and empirical adequacy over time. An educational hypothesis or conjecture must be judged in the light of the best array of relevant qualitative or quantitative data that can be garnered. If a hypothesis is insulated from such testing, then it cannot be considered as falling within the ambit of science.

A second assumption is that many scientific studies in education and other fields will not pan out. Research is like oil exploration—there are, on average, many dry holes for every successful well. This is not because initial decisions on where to dig were necessarily misguided. Competent oil explorers, like competent scientists, presumably used the best information available to conduct their work. Dry holes are found because there is considerable uncertainty in exploration of any kind. Sometimes exploration companies gain sufficient knowledge from a series of dry holes in an area to close it down. And in many cases, failure to find wells can shed light on why apparently productive holes turned out to be dry; in other words, the process of failing to make a grand discovery can itself be very instructive. Other times they doggedly pursue an area because the science suggests there is still a reasonable chance of success. Scientific progress advances in much the same way, as we describe in Chapter 2 .

Third, we assume that it is possible to describe the physical and social world scientifically so that, for example, multiple observers can agree on what they see. Consequently, we reject the postmodernist school of thought when it posits that social science research can never generate objective or trustworthy knowledge. 2 However, we simultaneously reject research that relies solely on the narrow tenets of behaviorism/positivism (see above) (National Research Council, 2001b) because we believe its view of human nature is too simplistic.

Fourth, the committee’s focus on the scientific underpinnings of research in education does not reflect a simplistic notion that scientific quality alone will improve the use of such research in school improvement efforts. Scientific quality and rigor are necessary, but not sufficient, conditions for improving the overall value of education research. There are major issues related to, for example, how the research enterprise should be organized at the federal and local levels, how it should and can be connected to policy and practice (National Research Council, 1999d), and the nature of scientific knowledge in education (Weiss, 1999; Murnane and Nelson, 1984). Throughout this report, we treat these complementary issues with varying degrees of depth depending on their proximity to our focus on the scientific nature of the field. Indeed, over the course of our deliberations, we have become aware of several complementary efforts focused on improving education research (e.g., NRC’s Strategic Education Research Partnership, RAND panels, Education Quality Institute, Interagency Education Research Initiative, and National Academy of Education-Social Science Research Council Committee on Education Research).

2. This description applies to an extreme epistemological perspective that questions the rationality of the scientific enterprise altogether, and instead believes that all knowledge is based on sociological factors like power, influence, and economic factors (Phillips and Burbules, 2000).

Finally, and critically, the committee believes that scientific research in education is a form of scholarship that can uniquely contribute to understanding and improving education, especially when integrated with other approaches to studying human endeavors. For example, historical, philosophical, and literary scholarship can and should inform important questions of purpose and direction in education. Education is influenced by human ideals, ideologies, and judgments of value, and these things need to be subjected to rigorous—scientific and otherwise—examination.

Structure of Report

The remainder of this report moves from the general to the specific. We begin by describing the commonalities shared across all scientific endeavors, including education research. We then take up some of the specifics of education research by characterizing the nature of education and of studying it scientifically; describing a sampling of trusted research designs used to address key questions; and providing guidance on how a federal education research agency could best support high quality science. A description of the specific contents of each chapter follows.

In Chapter 2 we address the global question of whether scientific inquiry in education has generated useful insights for policy and practice. We describe and analyze several lines of work, both inside and outside of education, to compare the accumulation of knowledge in education to that of other fields. In doing so, we provide “existence proofs” of the accumulation of knowledge in education and show that its progression is similar in many ways to other fields.

In Chapter 3 we provide a set of guiding principles that undergird all scientific endeavors. We argue that at its core, scientific inquiry in education is the same as in all other scientific disciplines and fields and provide examples from a range of fields to illustrate this common set of principles.

In Chapter 4 we describe how the unique set of features that characterize education shape the guiding principles of science in education research. We argue that it is this interaction between the principles of science and the features of education that makes scientific research in education specialized. We also describe some aspects of education research as a profession to further illuminate its character.

In Chapter 5, integrating our principles of science (Chapter 3) and the features of education (Chapter 4), we then take up the topic of the design of scientific education research. Recognizing that design must go hand in hand with the problem investigated, we examine education research design (and provide several examples) across three common types of research questions: What is happening? Is there a systematic effect? and How or why is it happening?

Finally, in Chapter 6 we offer a set of design principles for a federal education research agency charged with supporting the kind of scientific research in education we describe in this report. We argue that developing a strong scientific culture is the key to a successful agency and that all education stakeholders have a role to play in it.

Researchers, historians, and philosophers of science have debated the nature of scientific research in education for more than 100 years. Recent enthusiasm for "evidence-based" policy and practice in education—now codified in the federal law that authorizes the bulk of elementary and secondary education programs—has brought a new sense of urgency to understanding the ways in which the basic tenets of science manifest in the study of teaching, learning, and schooling.

Scientific Research in Education describes the similarities and differences between scientific inquiry in education and scientific inquiry in other fields and disciplines and provides a number of examples to illustrate these ideas. Its main argument is that all scientific endeavors share a common set of principles, and that each field—including education research—develops a specialization that accounts for the particulars of what is being studied. The book also provides suggestions for how the federal government can best support high-quality scientific research in education.

What are the benefits of educational research for teachers?

Ask an Expert: Rebecca Austin, Researching Primary Education

Cultivating a research-based approach to developing your practice provides evidence to effect change in your teaching, your classroom, your school, and beyond. Rebecca Austin, author of Researching Primary Education and Senior Lecturer in the School of Teacher Education and Development at Canterbury Christ Church University, highlights the benefits of research for your practice…

In the context of the debate about what works and why, there is a wide range of benefits to researching your own practice, whether it feeds directly into improvement through action research or, more broadly, builds understanding and knowledge on themes of interest and relevance. This is why research is embedded in initial teacher education. As research becomes embedded in your practice, you can gain a range of benefits. Research can:

  • clarify purposes, processes and priorities when introducing change – for example, to curriculum, pedagogy or assessment
  • develop your agency, influence, self-efficacy and voice within your own school and more widely within the profession.

Each of these can involve investigation using evidence from your own setting, along with wider research evidence. 

Applications of cognitive neuroscience in educational research.

  • Bert De Smedt, KU Leuven, University of Leuven
  • https://doi.org/10.1093/acrefore/9780190264093.013.69
  • Published online: 24 May 2018

The application of neuroscience to educational research remains an area of much debate. While some scholars have argued that such applications are not possible (and will never be possible), others have been more optimistic and suggest that these are possible, albeit under certain conditions, for example when one aims to understand very basic cognitive processes. Concrete examples of these applications are increasing in the emerging interdisciplinary field of mind, brain, and education or educational neuroscience, which posits itself at the intersection of cognitive neuroscience, psychology, and educational research. From a methodological point of view, cognitive neuroscience can be applied to (some types of) educational research, as it offers a toolbox to investigate specific types of educational research questions. Promising applications of cognitive neuroscience to educational research include comprehending the origins of atypical development, understanding the biological processes that play a role when learning school-relevant skills, predicting educational outcomes, generating predictions to be tested in educational research, and undertaking biological interventions. The challenges of applying cognitive neuroscience deal with ecological validity, the scope of a biological explanation, and the potential emergence of neuromyths.

  • educational neuroscience
  • neuro-education
  • mind, brain, and education
  • cognitive neuroscience
  • measurement

Introduction

Connections between neuroscience and educational research have gained traction since the late 1990s, as is exemplified by a systematic increase in the number of academic publications, with an especially steep acceleration of publications from 2005 (Howard-Jones, 2014a). There is now an increasing number of academic journals (e.g., Mind, Brain, and Education, Trends in Neuroscience and Education, Educational Neuroscience), textbooks (Mareschal, Butterworth, & Tolmie, 2014), master's programs in education departments (e.g., Harvard, Bristol, London, Leiden), funding initiatives (e.g., the Wellcome Trust Education and Neuroscience funding scheme), and scientific societies, such as the International Mind, Brain, and Education Society (IMBES) and special interest groups on neuroscience and education of the European Association for Research on Learning and Instruction (EARLI) and the American Educational Research Association (AERA), oriented toward this new interdisciplinary research field that is termed “Mind, Brain and Education” (e.g., Fischer, 2009), “Educational Neuroscience” (e.g., McCandliss, 2010), or “Neuro-education” (e.g., Howard-Jones, 2010). These terms are used interchangeably, and they all represent “a collaborative attempt to build methodological and theoretical bridges between cognitive neuroscience, cognitive psychology and educational practice without imposing a knowledge hierarchy” (Howard-Jones et al., 2016, p. 625).

Despite this enthusiasm, the application of neuroscience to educational research remains an area of much debate. In the late nineties, Bruer ( 1997 ) posited that this application was “a bridge too far,” that the distance between education and neuroscience was too wide to be bridged, because there was (at that time) not enough knowledge to apply neuroscience to education. Instead, he argued, (cognitive) psychology should act as a scaffold to bridge the gap between neuroscience and education, an issue that is largely acknowledged by researchers who apply (cognitive) neuroscience to education. More recently, this skepticism was revisited and even culminated in the claim that neuroscience cannot affect instructional design and that it is unlikely to improve teaching in the future, hence that it is redundant to education and educational research (e.g., Bowers, 2016 ; Smeyers, 2016 ). While this latter group of scholars has argued that such applications are not possible (and will never be possible), the educational neuroscience field (Howard-Jones et al., 2016 ) suggests that connections between (cognitive) neuroscience and education are possible, albeit under certain conditions, for example when one aims to understand very basic cognitive processes.

It is important to clarify that neuroscience spans a very wide range of sub-disciplines, ranging from molecular and cellular science, which tries to understand the chemistry of neuronal function, to cognitive neuroscience, which studies the brain mechanisms underlying human behavior and cognition (Squire et al., 2013 ). Not all these branches of neuroscience have (and can have) connections with educational research. The application is possible from only one sub-discipline, cognitive neuroscience (see Ward, 2006 , for an introduction), and the “neuroscience” in Educational Neuroscience or Mind, Brain, and Education exclusively refers to cognitive neuroscience (Howard-Jones et al., 2016 ). Likewise, it is important to point out that the application to education does not cover all areas of educational research, but is limited to certain subfields, particularly those that align with a (post)positivist research paradigm.

A brief primer is presented on contemporary brain imaging methods in cognitive neuroscience that are the most relevant to educational research and when and how these methods should be applied. Promising applications, which include creating causal models of atypical development, predicting educational outcomes via biological data, understanding learning at the biological level, generating predictions for educational research, and determining effects of biological interventions, are discussed. The challenges that are attached to the application of cognitive neuroscience to educational research are explored.

A Brief Primer on Cognitive Neuroscience Methods

The most widely used noninvasive brain imaging methods that are relevant for educational research are (functional) magnetic resonance imaging, or (f)MRI, and electroencephalography, or EEG (see Dick, Lloyd-Fox, Blasi, Elwell, & Mills, 2014, for a general introduction to cognitive neuroscience methods relevant to educational research; see also Ward, 2006); these methods have also been very popular in the field of psychology (e.g., Cacioppo, Berntson, & Nusbaum, 2008). These methods are noninvasive in that they do not depend on radioactive agents and they do not induce or depend on brain lesions. Sometimes, psychophysiological methods, such as eye movement data, heart rate, or skin conductance, are referred to as neuroscience methods. These methods indeed provide measures of the nervous system, yet they do not directly tap into the brain and are therefore not considered here.

Magnetic Resonance Imaging

Magnetic resonance imaging (MRI) relies on the magnetic properties of hydrogen atoms in brain tissue to visualize brain structure and in blood to investigate brain function (Ward, 2006 ). These properties are investigated by applying very powerful magnetic fields through the MRI-scanner. This requires participants to lie very still—they are not allowed to move more than a few millimeters—in a noisy environment, which is quite different from what is happening in the classroom. This method can investigate the structure of the brain—that is, the gray matter (cell bodies and synapses of the neurons), by means of voxel-based morphometry, or white matter (myelinated axons, also called tracts or fibers, which conduct information from cell bodies of a neuron away to another neuron), by means of diffusion tensor imaging or DTI. Common research questions include how the size of particular brain structures or white matter connections are related to individual differences in performance (e.g., in language see Richardson & Price, 2009 ), how brain structures differ between individuals with and without neurodevelopmental disorders (e.g., in ADHD, Norman et al., 2016 ), and how brain structures change as a result of education or specific interventions. For example, Keller and Just ( 2009 ) demonstrated by means of DTI that 100 hours of intensive remedial instruction in reading changed white matter tracts in the brains of poor readers. Supekar et al. ( 2013 ) showed that the size of the grey matter of the hippocampus, a structure that is relevant for the consolidation of facts in memory, predicted the learning gains of a one-on-one math tutoring intervention.

Functional MRI, one of the most widely used techniques in cognitive neuroscience, is a specific type of MRI that can be used to investigate brain function. More specifically, it indirectly assesses brain activity through the measurement of changes in the oxygen level of the cerebral blood while participants typically perform a specific cognitive task in the scanner. The technique rests on the observation that neuronal activity and cerebral blood flow are tightly coupled, such that increases in the oxygen level in cerebral blood are the result of the vascular system's response to increases in brain activity. This allows one to investigate where in the brain a particular (cognitive) process is taking place. For example, these studies have revealed the brain regions that are implicated in language and reading (e.g., Price, 2012), mathematics (e.g., Menon, 2015), and more general processes, such as executive functions (e.g., Bunge & Souza, 2009). While much of the earlier work focused on localizing brain activity in one or a few brain areas, it has become increasingly clear that the brain constitutes a mixture of highly functionally interconnected systems or large-scale networks, which cooperate to bring about complex cognitive abilities (Bressler & Menon, 2010), and these abilities are central to educational research. Functional brain activity studies have therefore expanded their attention to how brain regions interact over time, and for this reason, network approaches have become increasingly useful in fMRI research (e.g., Bressler & Menon, 2010). The practical constraints of the MRI environment (e.g., the fact that no movement is possible as well as the presence of loud noise) obviously limit the types of tasks that participants can perform in the scanner, even though a number of studies with complex tasks, such as video game play (Anderson et al., 2011) and even face-to-face interactions (Redcay et al., 2010), are now possible.
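To make the logic of task-based fMRI analysis concrete, the sketch below (Python, with simulated data) illustrates the standard general linear model idea: a hypothetical stimulus time course is convolved with a canonical double-gamma hemodynamic response function, and a voxel's noisy time series is regressed on the result. The design, parameter values, and data are illustrative assumptions, not taken from any study discussed here.

```python
# A minimal sketch (not any cited author's pipeline) of the GLM logic behind
# task fMRI: convolve stimulus onsets with a canonical haemodynamic response
# function (HRF) and regress a voxel's time series on the resulting predictor.
import numpy as np
from scipy.stats import gamma

TR = 2.0                      # repetition time in seconds (assumed)
n_scans = 200
t = np.arange(0, 30, TR)      # time axis for the HRF kernel

# Canonical double-gamma HRF with commonly used default shape parameters.
hrf = gamma.pdf(t, 6) - (1.0 / 6) * gamma.pdf(t, 16)
hrf /= hrf.sum()

# Hypothetical block design: 20 s of task alternating with 20 s of rest.
stimulus = np.zeros(n_scans)
for onset in range(0, n_scans, 20):
    stimulus[onset:onset + 10] = 1.0

# Predicted BOLD signal = stimulus time course convolved with the HRF.
predictor = np.convolve(stimulus, hrf)[:n_scans]

# Simulate one voxel's noisy time series and fit the GLM by least squares.
rng = np.random.default_rng(0)
voxel = 0.8 * predictor + rng.normal(0, 0.3, n_scans)
X = np.column_stack([predictor, np.ones(n_scans)])
beta, *_ = np.linalg.lstsq(X, voxel, rcond=None)
print(f"Estimated task effect (beta): {beta[0]:.2f}")
```

In real analyses, the same regression is run for every voxel (or for connectivity measures between regions), with additional nuisance regressors for motion and drift; the sketch only conveys the core model.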

Electroencephalography (EEG)

This method directly measures the electrical activity of the brain, which constitutes the information transfer from one neuron to the other. It is important to point out that EEG is not measuring activity in a single neuron—this can be done only via single-cell recording techniques, which cannot be applied to human participants. Instead, EEG measures the synchronized activity of thousands of neurons at the same time. The acquisition of EEG data involves mounting a cap of electrodes on the head of a participant. The participant has to perform a very basic cognitive task, during which these electrodes measure brain activity. Most often, the brain activity in large ensembles of neurons in response to a particular stimulus, the so-called event-related activity or event-related potential (ERP), is measured. This is registered on a very accurate temporal scale, which makes this method particularly useful to investigate when a particular type of process takes place (high temporal resolution). As this activity is measured on the surface of the skull, it is, however, difficult to precisely know where this brain activity is originating from (low spatial resolution). In order to reliably estimate the brain response to a stimulus, researchers typically need a large number of stimuli of a particular type; thus, these ERP studies typically include more than a few dozen stimuli per type. ERPs are mainly used to study very fast cognitive processes, such as attention or perception, which are difficult to capture by behavioral data alone. For example, Morgan-Short, Steinhauer, Sanz, and Ullman (2012) used ERPs to investigate the differential effects of explicit language education (grammar-focused classroom setting) versus implicit education (immersion setting) on how the brain (quickly) processes syntactic information. As has been the case in MRI research, there is also increasing interest in using EEG methods to understand how functional brain networks are formed and interact with each other during complex cognitive tasks. Therefore, the study of brain oscillations (i.e., oscillations in various frequencies of the continuous EEG signal, also known as brain waves or rhythms, which are assumed to indicate knowledge representation as well as knowledge transfer between different brain areas), as well as the synchronicity of these oscillations across different brain regions, has become increasingly popular (Klimesch, Schack, & Sauseng, 2005). For example, these EEG oscillations have been related to cognitive load, and such EEG measures can be used in educational research as online continuous measures to assess cognitive load during learning (Antonenko, Paas, Grabner, & van Gog, 2010). Technological advances in EEG equipment have resulted in the availability of wireless EEG systems, which are particularly promising for educational research as they allow one to collect data in more ecologically valid settings and in multiple participants at the same time. For example, Dikker et al. (2017) recently investigated group interactions in the classroom and showed that more synchrony in brain activity patterns between students predicted higher classroom engagement.
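The need for many trials in ERP research follows directly from signal averaging: the evoked response is small relative to ongoing EEG activity, and averaging over N trials shrinks that background noise by roughly the square root of N. The following minimal sketch, on simulated and purely hypothetical data, illustrates this point.

```python
# A minimal, simulated illustration of why ERP studies need many trials: the
# evoked response is tiny relative to ongoing EEG "noise", and averaging over
# N trials reduces that noise by roughly sqrt(N). All values are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
fs = 250                                  # sampling rate in Hz (assumed)
t = np.arange(-0.2, 0.8, 1 / fs)          # epoch from -200 ms to 800 ms

# A small "P300-like" component: a Gaussian bump peaking around 300 ms (2 microvolts).
erp = 2.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))

def simulate_trials(n_trials, noise_sd=20.0):
    """Each trial = the true ERP plus large-amplitude background EEG noise."""
    noise = rng.normal(0, noise_sd, size=(n_trials, t.size))
    return erp + noise

for n_trials in (1, 10, 100):
    average = simulate_trials(n_trials).mean(axis=0)
    residual = np.std(average - erp)
    print(f"{n_trials:4d} trials -> residual noise ~ {residual:5.2f} microvolts")
```

With a single trial the 2-microvolt component is invisible in the noise; with on the order of a hundred trials it emerges clearly, which is why ERP paradigms repeat each stimulus type many times.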

What is common to these two categories of imaging methods is that they both require a solid cognitive theory of the skill under investigation. Indeed, the signals that indicate brain structure or function can be meaningfully interpreted only if they are linked to cognitive theories (e.g. Cacioppo, Berntson, & Nusbaum, 2008 ; De Smedt et al., 2011 ), and this represents a necessary step in studies in cognitive neuroscience. In order for these methods to be used in educational research, a well-developed detailed cognitive theory of the phenomenon under investigation is required.

Cognitive Neuroscience Methods as Tools in Educational Research

When are these cognitive neuroscience methods applicable to educational research? This should depend on the specific research question at hand (De Smedt, 2014 ). A nice analogy to understand when cognitive neuroscience methods might be employed in educational research was provided by Stern and Schneider ( 2010 ). They compared the application of these methods to the use of a digital road map. These maps allow one to adjust the zoom level or resolution, depending on the level of detail (macro or highways vs. micro or alleys) the map viewer is looking for. Some types of educational research focus on very broad large-scale phenomena (macro-level), as is the case in research on educational systems, which are at a low level of resolution, as the map needs to contain the broader environment. Other questions aim to discover very specific cognitive processes (micro-level) that underlie the learning of specific skills, for example the types of representations that are used when executing certain arithmetic strategies. These cognitive processes can be difficult to measure via behavior, through tests, questionnaires, and observations, and require measurement at a high level of resolution. It is at this high level of resolution that cognitive neuroscience methods can be applied to educational research. It is important to note that not all educational research is at the same level of resolution and, as a result, cognitive neuroscience methods are not relevant for all educational research. It is only when a micro-level of understanding is required that cognitive neuroscience methods can be applied to educational research. This is particularly true when educational research adopts a positivist paradigm, as this paradigm is also key to cognitive neuroscience. In all, these cognitive neuroscience methods can be applied to some but not all types of educational research.

The application of cognitive neuroscience methods to educational research is particularly relevant when these micro-level research questions are difficult to answer with behavioral data alone; and for similar reasons such methods have been useful to the field of psychology too (Cacioppo, Berntson, & Nusbaum, 2008 ). One example comes from research on cognitive load in which EEG data have been used to continuously measure cognitive load during specific instructional interventions, for example the learning from hypertext and multimedia (Antonenko, Paas, Grabner, & van Gog, 2010 ). These EEG measures allowed these researchers to detect subtle changes in cognitive load during the instruction, whereas such differences would have been difficult to capture with standard behavioral measures of cognitive load. More recently, Anderson, Pyke, and Fincham ( 2016 ) used fMRI to identify different cognitive stages (i.e., encoding, planning, solving, and responding) and their duration during mathematical problem-solving at the level of an individual trial. This approach offers exciting opportunities to investigate different stages of problem-solving and their time course, which are difficult (or sometimes impossible) to detect by analyzing errors, reaction times, or verbal protocol data.
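As an illustration of the spectral logic behind such EEG-based measures of cognitive load, the sketch below estimates theta and alpha band power from a simulated single-channel signal using Welch's method. It is a toy example under stated assumptions; real analyses involve artifact rejection, baseline normalization, and carefully chosen electrode sites.

```python
# A minimal sketch of band-power estimation, the kind of spectral measure that
# has been related to cognitive load: power in the theta (4-8 Hz) and alpha
# (8-13 Hz) bands. The signal is simulated; the interpretation is illustrative.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(1)
fs = 256                                   # sampling rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)               # 30 s of one hypothetical EEG channel

# Hypothetical signal: some theta, some alpha, plus broadband noise.
eeg = (1.5 * np.sin(2 * np.pi * 6 * t)     # theta component at 6 Hz
       + 1.0 * np.sin(2 * np.pi * 10 * t)  # alpha component at 10 Hz
       + rng.normal(0, 1.0, t.size))

freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)

def band_power(lo, hi):
    """Approximate power in [lo, hi) Hz by summing the spectral density."""
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])

theta, alpha = band_power(4, 8), band_power(8, 13)
# In the cognitive-load literature, increased theta and decreased alpha power
# are often read as signs of higher load (an interpretation, not a given).
print(f"theta power = {theta:.2f}, alpha power = {alpha:.2f}, ratio = {theta / alpha:.2f}")
```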

Promising Applications

Causal Models of Atypical Development

One of the most-cited applications of neuroscience to educational research has been that it contributes to our understanding of atypical development (Butterworth, Varma, & Laurillard, 2011 ; Gabrieli, 2009 ; Goswami, 2004 ; Royal Society, 2011 ). More specifically, disorders in the acquisition of school-relevant skills, such as dyslexia and dyscalculia, have been grouped under the term neurodevelopmental disorders , a category that also includes autism spectrum disorder, ADHD, and intellectual disability, all of which have serious consequences when the child enters school and for which educational interventions have been developed. The neuro in neurodevelopmental disorders refers to the idea that the origin of these disorders is biological and that they result from aberrant brain structure and/or function. The precise biological causes of these disorders remain unknown, but neuroimaging research has made considerable progress in understanding brain structure and function in these conditions. For example, research on dyslexia, a neurodevelopmental disorder that is characterized by specific and persistent deficits in learning to read, has revealed abnormalities in the brain networks that are used for the processing of phonemes (Eden, Olulade, Evans, Krafnick, & Alkire, 2016 ; Gabrieli, 2009 ), which is an important prerequisite for learning to read (Melby-Lervag, Lyster, & Hulme, 2012 ). Similarly, it has been shown that the brain networks that support the processing of numerical magnitudes, a skill that is very important for learning to calculate (De Smedt, Noël, Gilmore, & Ansari, 2013 ; Schneider et al., 2017 ), are impaired in individuals with dyscalculia, who experience serious and life-long difficulties in basic calculations (Butterworth et al., 2011 ). These neuroimaging data then also suggested that the cognitive skills that are subserved by these brain networks should be the particular focus of educational remedial interventions (Butterworth et al., 2011 ; Gabrieli, 2009 ; McCandliss, 2010 ; Shaywitz, Morris, & Shaywitz, 2009 ). Effective interventions that target specific deficits in phonological processing (e.g., Snowling & Hulme, 2011 ) and numerical magnitude understanding (e.g., Dyson, Jordan, & Glutting, 2013 ) in children with learning disabilities have been developed, and even their effects on brain structure and function have been studied (e.g., Barquero, Davis, & Cutting, 2014 ; Fraga González et al., 2016 ; Kucian et al., 2011 ).

On the other hand, it needs to be empirically verified whether the abnormalities in these brain networks are truly the cause of these learning disabilities or whether these brain abnormalities are the consequence of poor academic achievement. This remains unknown, as most of the existing data are correlational as well as cross-sectional, and these studies simply report associations between a particular disorder and brain abnormalities, which do not allow one to determine the direction of associations and their causality (Goswami, 2008). Interestingly, recent studies are now beginning to show that the brain abnormalities in phonological processing areas in dyslexia are already present before children learn to read, and that they predict later reading acquisition. These studies build on the genetic nature of dyslexia (Snowling & Melby-Lervag, 2016) and compare children with a family risk for dyslexia (i.e., a first-degree relative with dyslexia) to those without such a risk. Children with a family risk for dyslexia can be identified before the onset of formal education (e.g., in preschool), and their brain development can be characterized. As soon as it is possible to clinically diagnose dyslexia (i.e., in second grade), data can retrospectively be analyzed by comparing children with and without dyslexia before and during the early years of schooling. This allows one to disentangle causes (abnormalities that are already present before children learn to read) from consequences (abnormalities that emerge after schooling has started, which may be the result of less reading experience). Family risk studies in dyslexia suggest that these brain abnormalities are likely, at least in part, to be the origin of their reading difficulties (Vandermosten, Hoeft, & Norton, 2016). In all, if these neurobiological causes of atypical development can be identified and detected at an early age, they can be used as markers to predict atypical development and, even further, educational outcomes.

Neuro-Prediction

Neuro-prediction (De Smedt & Grabner, 2015), or neuroprognosis, refers to the idea that brain imaging measures can be used as biomarkers to predict educational outcomes (e.g., Black, Myers, & Hoeft, 2015; Hoeft et al., 2007) and, in particular, to identify children at risk for learning difficulties at an early age (Diamond & Amso, 2008; Gabrieli, 2009; Goswami, 2008). More specifically, brain imaging data can be collected before children possess skills that are necessary for traditional behavioral assessment, such as language. This allows the identification of at-risk children before the start of formal education and opens opportunities for early intervention, which may have preventive effects. Attempts to discover such biomarkers have been made in the early detection of dyslexia. Molfese (2000) showed that ERP responses to speech sounds recorded in newborns discriminated with 81% accuracy those infants who would develop dyslexia at the age of 8. Even though this classification accuracy is significantly beyond chance, it should be interpreted with great caution, as it indicates that this biomarker incorrectly classifies approximately 20% of the cases (i.e., roughly 20% false positives: incorrectly identifying children without dyslexia as having dyslexia; as well as roughly 20% false negatives: failing to identify children with dyslexia as having dyslexia).
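To see why such caution is warranted, a small and purely hypothetical screening calculation helps; the sensitivity, specificity, and prevalence values below are illustrative assumptions, not figures from Molfese (2000).

```python
# A hypothetical illustration of why a biomarker with roughly 80% sensitivity
# and specificity must be interpreted with caution when used for screening.
# All numbers below are made up for illustration.

def screening_outcomes(n_children, prevalence, sensitivity, specificity):
    """Expected outcomes when screening n_children for a condition."""
    affected = n_children * prevalence
    unaffected = n_children - affected
    true_pos = affected * sensitivity           # correctly flagged
    false_neg = affected - true_pos             # missed cases
    false_pos = unaffected * (1 - specificity)  # wrongly flagged
    ppv = true_pos / (true_pos + false_pos)     # chance a flagged child is affected
    return true_pos, false_neg, false_pos, ppv

# Assumed values: 80% sensitivity and specificity, 7% prevalence of dyslexia.
tp, fn, fp, ppv = screening_outcomes(1000, 0.07, 0.80, 0.80)
print(f"per 1000 children: {tp:.0f} detected, {fn:.0f} missed, {fp:.0f} false alarms")
print(f"probability that a flagged child actually develops dyslexia: {ppv:.0%}")
```

Under these assumed values, most flagged children would not in fact develop dyslexia, which is one reason why such biomarkers are best treated as one risk indicator among several rather than as a diagnostic test.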

More recently, studies have started to investigate whether brain imaging measures can predict subsequent learning gains (Hoeft et al., 2011) or even whether they can predict the response to educational interventions (Supekar et al., 2013). Supekar et al. (2013) investigated which brain measures, in addition to behavioral outcomes, predicted the gains of a one-on-one math tutoring intervention. Their data revealed that only the volume and connectivity of the hippocampus, and not the behavioral data, predicted the learning gains: the larger the hippocampus before the start of the intervention, the larger the learning gains. This association of learning gains with the size of the hippocampus is not surprising, given that this area of the brain is particularly relevant to the consolidation of facts in memory and that the specific educational intervention under study involved the automatization of arithmetic facts.
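The brain-behavior association reported in such studies is, statistically, a familiar one. The sketch below shows, on simulated and entirely hypothetical data, the kind of correlation and regression analysis used to ask whether a pre-intervention structural measure predicts later learning gains; none of the numbers come from Supekar et al. (2013).

```python
# A minimal sketch, on simulated data, of a brain-behaviour prediction analysis:
# does a structural measure taken before an intervention (here, hippocampal
# volume) predict subsequent learning gains? All values are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_children = 24

# Hypothetical pre-intervention hippocampal volumes (arbitrary units) and
# tutoring gains that partly depend on them, plus unrelated variation.
volume = rng.normal(100, 10, n_children)
gains = 0.4 * (volume - 100) + rng.normal(0, 5, n_children)

r, p = stats.pearsonr(volume, gains)
slope, intercept, *_ = stats.linregress(volume, gains)
print(f"r = {r:.2f} (p = {p:.3f}); predicted gain = {intercept:.1f} + {slope:.2f} * volume")
```

In actual neuro-prediction studies the same logic is extended to multiple brain and behavioral predictors, with cross-validation to guard against overfitting in small samples.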

These are just the very early steps in trying to predict outcomes of educational interventions on the basis of neuroscientific data. The success of this approach will stand or fall with the quality of the educational interventions that are being investigated. This requires the involvement of educational researchers in these kinds of cognitive neuroscientific studies. Without this, there is a serious risk that such predictive studies are meaningless to both cognitive neuroscience and educational research, due to a lack of theoretical grounding. And even though brain imaging measures can predict later (reading) achievement, it will be important to determine the added value of these cost-intensive measures on top of traditional behavioral assessments. There are preliminary data that indeed suggest that neuroimaging data can explain additional variance in academic achievement beyond what is predicted from behavioral measures (Hoeft et al., 2007), but this again depends on the careful selection of behavioral measures to predict a given behavior.

Learning at the Biological Level

Neuroimaging studies also allow us to understand learning at the biological level, which adds a new level of analysis to educational theory, for example in models on the acquisition of school-taught skills, such as reading and mathematics. This has the potential to complement as well as extend the existing knowledge that has been obtained on the basis of psychological educational research, and this new level of analysis might lead to a more complete understanding of learning (see also Lieberman, Schreiber, & Ochsner, 2003 , for an application in political science).

One example is the componential understanding of complex cognitive skills taught via education (Dowker, 2005). Specifically, Dowker (2005) argued, against the background of cognitive neuropsychological studies with brain-damaged patients (e.g., Dehaene & Cohen, 1997), that mathematics is not a unitary skill but instead should be conceived of as consisting of multiple components to which interventions should be tailored. These neuropsychological studies showed that brain-damaged patients can be selectively impaired in different yet specific areas of mathematics. This fractionation of mathematical skill has subsequently been used to investigate individual differences in these components and to develop educational interventions that take these components as starting points to tailor instruction to the specific strengths and weaknesses of children in mathematics (Dowker, 2005).

Neuroimaging data can also provide construct validity for educational theories. For example, the IQ-discrepancy criterion—the requirement that, in children with specific learning disorders, there should be a clear discrepancy between their IQ and their academic performance—has been criticized for many years in psychological and educational research. This critique has been supported in behavioral studies showing that individuals with and without a discrepancy between their IQ and their poor academic performance do not differ in their cognitive profile or in their response to educational interventions (Fletcher, Lyon, Fuchs, & Barnes, 2007). Tanaka et al. (2011) tested the validity of this observation at the biological level. They compared brain activity during reading in children with reading difficulties with and without a discrepancy with their IQ. Their findings revealed that the brain activity patterns in the two groups of children did not differ, which converges with the behavioral evidence that reading difficulties (and their neural correlates) are independent of IQ. These observations also validated the removal of the IQ-discrepancy criterion from the definition of specific learning disorders in the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5; American Psychiatric Association, 2013), a classification of mental disorders widely used by mental health providers.

This observation of converging evidence has been dismissed by critics of educational neuroscience (e.g., Bowers, 2016; Smeyers, 2016), who argue that such findings add nothing to what we already know on the basis of psychological or educational research. However, if a particular mental process has an identified biological substrate, then the theoretical understanding of this process will have more explanatory power if it is constrained by both behavioral and biological data; and consequently, a better explanatory model for a given educational phenomenon will be a better basis for grounding educational interventions (Howard-Jones et al., 2016).

There are also examples of studies in which neuroimaging data reveal something different (diverging evidence) from what is observed in behavioral data. This is particularly the case when one aims to measure subtle processes that are hard to capture with behavioral data alone. Investigating optimal ways to foster foreign language learning, Morgan-Short, Steinhauer, Sanz, and Ullman (2012) used ERPs to unravel the differential effects of explicit language education (grammar-focused classroom setting) versus implicit education (immersion setting) on syntactic processing. While both approaches resulted in similar behavioral improvements, as measured by means of a cognitive task in which participants had to judge the correctness of sentences, the ERP measures revealed striking differences between the two approaches. The immersion setting resulted in a brain response that was similar to what is observed in native speakers, whereas this was not the case in the grammar-focused setting, suggesting that immersion may be more beneficial for adult foreign-language learning than explicit grammar-focused instruction.

More recently, Anderson, Pyke, and Fincham ( 2016 ) demonstrated by using multivariate fMRI analyses that it is possible to parse the online problem-solving process of a particular mathematical problem into different cognitive stages (i.e., encoding, planning, solving, and responding). This offers an interesting possibility to investigate the durations of each of these stages and how they change as a function of (different) educational approaches. These processes are very difficult, if not impossible, to capture by means of behavioral data, such as systematic error analysis, the analysis of latencies, or the use of introspective verbal protocols.

Another example of neuroimaging methods that reveal something different than can be observed from behavioral data lies in the detection of compensatory processes that arise in the context of remedial interventions. Various intervention studies in the domain of reading have revealed that these evidence-based programs lead to a normalization of brain activity and structure in those networks that are typically associated with normal reading, and this is accompanied by improvements in behavioral reading performance (e.g., Hoeft et al., 2011 ; Keller & Just, 2009 ; Temple et al., 2003 ). On the other hand, each of these studies also revealed changes in brain circuits not typically associated with reading, such as changes in the right prefrontal cortex, an observation that is suggestive of the involvement of compensatory processes. Such compensatory mechanisms are hard to detect via behavioral data. The precise function of such compensatory processes is often unclear and needs to be better understood but offers a promising avenue for informing future interventions.

Generating Predictions for Educational Research

This study of compensatory processes is also an example of a promising application of cognitive neuroscience to educational research that is more indirect. Specifically, findings from cognitive neuroscience might have the potential to generate new hypotheses on educational phenomena that can be tested in follow-up educational research, and a similar application of neuroscience to psychological research has been described by Aue, Lavelle, and Cacioppo ( 2009 ). In this way, an iterative cycle of interdisciplinary research can be generated (see also Howard-Jones et al., 2016 ). One example comes from the study of the role of finger representations in numerical development (Kaufmann et al., 2008 ). These authors examined brain activity during the comparison of numbers, a crucial skill in mathematical development (e.g., De Smedt et al., 2013 ), in children and adults. While the groups did not differ in their behavioral performance, children showed more activation in those brain areas that are associated with finger movements and grasping, leading Kaufmann et al. ( 2008 ) to suggest that finger-based representations play a more important role in children’s understanding of number, and perhaps should be addressed when designing interventions. The role of finger representations in numerical development has been the focus of a series of recent behavioral studies, but the evidence is mixed to date (Long et al., 2016 ; Wasner et al., 2016 ), leaving it unresolved whether the use of fingers should be encouraged or discouraged when teaching early mathematics.

Effects of Education on Biology

One of the key findings in neuroscience is that our brains are highly plastic, which means that they are shaped by experience, a process referred to as experience-dependent plasticity (Diamond & Amso, 2008; Johnson & de Haan, 2011), and this process is present throughout life. There are massive developmental changes in brain structure and function that continue into late adolescence (Giedd & Rapoport, 2010) and beyond, and these changes are driven by environmental input. One example comes from a study of London taxi drivers (Maguire et al., 2000), which investigated how extensive training in navigating the city of London affects brain structure. This study revealed that training navigational skills induced changes in the taxi drivers' hippocampi, structures that are also involved in spatial navigation, and that the amount of training correlated with the size of the observed morphological changes in the brain. Importantly, there were individual differences in the extent to which the training affected brain structure, which suggests that plasticity is not unlimited.

Because children spend a large amount of time at school, education is one of the most powerful sources that shapes the development of our brains. There is now an increasing number of studies that investigate how learning to read changes and reorganizes brain structure and function (Dehaene, Cohen, Morais, & Kolinsky, 2015 ; Skeide et al., 2017 ). Literacy acquisition not only constructs brain circuits that become associated with reading but also changes brain circuitry (as well as its connectivity) that is not typically associated with reading, such as the visual ventral stream. These changes have also been observed in formerly illiterate adults, which exemplifies that plastic brain changes can occur at different ages (Dehaene, Cohen, Morais, & Kolinsky, 2015 ; Skeide et al., 2017 ). Another example is provided by Neville et al. ( 2013 ), who designed a family-based intervention program, based on knowledge of the neuroplasticity of attention and parenting research. They showed positive effects of the intervention on preschoolers from low socioeconomic backgrounds, and these effects were observed in electrophysiological measures of brain function, cognitive measures, and parent reports on child behavior. What is currently missing are studies that investigate what precise aspects of these educational experiences affect brain development and the limits of this plasticity, and this clearly represents an area for future studies at the crossroads of cognitive neuroscience and educational research.

Special education represents another area in which the effects of education on the brain have been revealed, through the study of specific remedial interventions on brain structure and function in atypical development (e.g., McCandliss, 2010 ). Such studies have revealed that processes of normalization—brain function becomes more similar to a typically developing control group—and compensation—activity patterns in regions different from what is observed in typically developing children—occur. These patterns on how individuals with atypical learning compensate for their difficulties are very relevant to education, as they may provide novel ways of teaching specific compensation strategies, the effects of which should be investigated in educational research.

It is important to emphasize that studies that investigate the effects of education on the brain should carefully take into account the broader educational context (e.g., participants’ learning histories, teaching materials) in which learning takes place. These variables should not be considered as confounds that should be controlled for. Rather, they should be the focus of interest as variability in these factors will have massive effects on brain structure and function. Future studies should therefore consider how these characteristics of the learning context moderate neural data acquired via brain imaging measures.

Biological Interventions

In addition to investigating the effects of educational interventions on brain structure and function, recent advances have made it possible to intervene directly at the biological level and to use neurophysiological interventions, that is, transcranial electrical brain stimulation, or TES, to directly affect brain activity and, consequently, behavior (Cohen-Kadosh, 2014). During TES, a small electrical current is noninvasively applied to the brain via electrodes fixed to the scalp. The current is thought to change the activity level of the cortical regions under the electrodes and is assumed to change performance or learning. Various studies show effects of TES (see Cohen-Kadosh, 2014, for a review). For example, the use of particular types of brain stimulation leads to improved performance in arithmetic (Krause & Cohen-Kadosh, 2013), although not all individuals respond in the same way to such stimulation (Krause & Cohen-Kadosh, 2014).

The field of brain stimulation is currently in its infancy, and at this point we do not fully understand the underlying mechanisms of these techniques (Schuijer, de Jong, Kupper, & van Atteveldt, 2017). It is also crucial to emphasize that TES has an effect only if it is accompanied by traditional behavioral and cognitive training. The existing studies are limited to (mainly healthy) adults, their findings cannot simply be extrapolated to the developing brain, and it remains to be verified whether such education-related applications are ethically acceptable (Schuijer et al., 2017).

Methodological Challenges

One of the major issues that warrant attention in the application of cognitive neuroscience to educational research involves the adaptation of neuroscientific designs and data acquisition methods in order to obtain high ecological validity, that is, generalizability of findings from laboratory studies to educational contexts (see also De Smedt & Grabner, 2015). A first concern deals with the samples in cognitive neuroscience studies. Most of the existing studies included adults, or, even more narrowly, university students, as participants. These samples, which are usually rather small, are probably very homogeneous in terms of their educational and cultural backgrounds. Findings from these studies are not readily generalizable to individuals from more diverse educational and cultural backgrounds, and more diverse samples are probably needed in order to fully capture the complexity of differences in learning and educational trajectories. Furthermore, the number of studies in school-aged children, the period during which the effects of education on brain structure and function are most prominent, is, although growing, rather small. This is explained, at least partially, by the adverse effects of movement during the acquisition of imaging data—children move much more than adults—which severely distorts the quality and even the usability of the data. There could also be issues related to recruitment as well as considerations of local ethics committees that make research in children much more challenging. On the other hand, the number of developmental imaging studies is increasing, and guidelines are being developed in order to apply these methods to children (e.g., Vogel, Matejko, & Ansari, 2016). It is important to point out that, in view of the massive changes in brain structure and function throughout childhood and adolescence and in view of the large effects of the environmental context on brain plasticity, the generalization of adult findings to developmental populations is doubtful, even if these studies investigate effects of educational interventions (Ansari, 2010). Ignoring participant variability in terms of age, culture, and educational experiences will seriously limit the potential contribution of cognitive neuroscience studies to educational research.

A second concern deals with the tasks that are being used during the acquisition of the imaging data. These tasks are very basic and quite different from what is being done in the classroom, where a much larger variety and complexity of tasks is employed. One reason to avoid complex tasks is that such tasks are solved by multistep procedures, which involve a plethora of cognitive processes that occur at various points during problem-solving. The more cognitive processes involved, the more difficult it becomes to disentangle them at the neurophysiological level. Another reason is that the signals that are being recorded during brain imaging methods are characterized by a large measurement error and require several trials of the same type of task to reliably estimate the brain response to a particular type of stimulus. fMRI and particularly ERP studies therefore require a lot of trials to be administered, a situation that is very different from assessments in the classroom. Furthermore, the response mode during the acquisition of brain imaging data is restricted to one or a few simple key presses. This is done to avoid movement artifacts. Typically, participants have to verify the correctness of an answer or select one from a few response alternatives, instead of actively producing the answer to a given problem. This again is different from the classroom, and the verification of answers probably induces different strategies, and consequently different (neuro)cognitive processes. Also, vocal responses as well as interactions between individuals are difficult to record due to either the aim to keep movement as minimal as possible to avoid distortion of the imaging data, or the noise of data acquisition methods, as is the case during MRI.

These methodological challenges could be a starting point for more constructive collaborations between neuroscientists and educational researchers that combine paradigms from both traditions (Ansari, De Smedt, & Grabner, 2012 ). One example involves the study of how measures of brain activity that are acquired under strictly controlled laboratory conditions correlate and predict real-world, and thus ecologically valid, measures of classroom learning. Another possibility is to study how environmental variables, such as educational history or cultural background, moderate brain activity during certain basic tasks. Importantly, this will require not only sufficiently large samples of participants who vary in their educational history, but also, and crucially, it will involve a careful analysis of the educational context and how its constituents impact on brain structure and function. This necessitates interdisciplinary projects in which cognitive neuroscientists and educational researchers are working at the same level (e.g., De Smedt et al., 2010 ).

The Scope of Biological Data

An important caveat in applying neuroscience to educational research is the scope of biological data or explanations. A biological explanation for a given psychological or educational phenomenon is often considered more reliable, convincing, and deterministic than a nonbiological explanation (Beck, 2010). More specifically, people rated explanations of psychological phenomena, particularly incorrect ones, as more likely when these explanations referred to the brain (Weisberg, Keil, Goodstein, Rawson, & Gray, 2008; Weisberg, Taylor, & Hopkins, 2015). This may be so because neuroscientific data or explanations are perceived as causal—even though causality depends on the design of the research and not on the type of data that is being collected—and people tend to prefer information that provides evidence for the cause of an event (Weisberg et al., 2015). Nevertheless, it is of utmost importance to emphasize that the data and knowledge gleaned from cognitive neuroscience methods are at the same level, in terms of reliability, validity, and credibility, as data obtained by standard behavioral methods in educational research. There is no hierarchy of knowledge, with one type of data being more convincing than the other, but an appreciation of a variety of data collection methods to better understand educational phenomena (De Smedt et al., 2011). Relatedly, it is crucial to emphasize that biology is not destiny and that a particular behavior is not hardwired or unchangeable. On the contrary, one of the key findings in neuroscience is that our brains are highly plastic, which means that they are shaped by (educational) experience. Furthermore, educational research has revealed that teaching adolescents about the plasticity of brain development—in particular the idea of a growth mindset, or the assertion that intelligence is not fixed or hardwired in the brain but is malleable and can be developed—positively influences their attitudes toward learning and consequently their learning performance (Blackwell, Trzesniewski, & Dweck, 2007; Mangels, Butterfield, Lamb, Good, & Dweck, 2006).

Nuanced Translation Is Crucial

It is important to point out that the application of cognitive neuroscience to educational research is not a panacea or quick fix for unresolved problems in educational research. Indeed, the study of the brains of learners, or the identification of a neural correlate of a particular behavior and its dysfunction, does not readily answer questions about effective teaching and curriculum design. This direct application from highly controlled laboratory settings without translation to the complex and multi-determined realm of education would be a bridge too far (Bruer, 1997 ) and runs the risk of misinterpretations.

Such misinterpretations can occur on both sides. Neuroscientists can be naive about the context in which learning takes place (e.g., the curriculum, how it is taught, what pedagogical approaches are used) and can easily be tempted to simply convert an experimental task that is sensitive to individual differences in brain activity into an intervention. One example is the training of working memory, in which individuals simply repeat easy working memory tasks used in scientific research (e.g., digit span) in the hope of improving academic outcomes. While practicing such tasks improves working memory, this training does not result in better academic skills (Melby-Lervag & Hulme, 2013). There are many commercially available packages that focus on the training of working memory, often advertised as brain training games, yet their effects on academic performance have not been scientifically established (Owen et al., 2010).

On the other hand, educators might over-interpret findings from brain imaging research. Such over-interpretations of brain imaging findings have been denoted as neuromyths , which may have adverse effects on educational practice (Howard-Jones, 2014b ). Neuromyths are misconceptions about the brain, based on an incorrect interpretation of neuroimaging research, that are used to justify certain types of so-called brain-based educational interventions. One such example, which has been observed in a variety of countries (Howard-Jones, 2014b ), deals with brain-based learning styles, in particular the so-called left-brain and right-brain learners, which are used to justify a variety of educational interventions that are equipped with very costly teaching materials (Howard-Jones, 2014b ). There is no neuroimaging evidence for the existence of left-brained or right-brained thinkers (Nielsen et al., 2013 ), neither is there any support for the effectiveness of teaching practices that are based on this left-brained versus right-brained distinction (Lindell & Kidd, 2011 ).

As a consequence, there is no direct route from cognitive neuroscience to education, and cognitive neuroscience does not reveal what should be taught. Rather, it helps us to understand mechanisms that underlie teaching and learning, which then need to be translated, via educational research, into strategies and programs to optimize teaching and learning (Howard-Jones et al., 2016). This requires merging findings from cognitive neuroscience with educational theories and frameworks of instructional design (e.g., Van Merriënboer & Kirschner, 2007). This should result in novel educational approaches, the effects of which should be tested by means of rigorous educational research that ranges from small-scale interventions to larger randomized controlled trials (Sloane, 2008). Potentially, the neural markers of such interventions can subsequently be evaluated in laboratory studies to improve our understanding of the mechanisms underlying the intervention. This requires interdisciplinary training of researchers or translators who are versed in educational research but have a solid background in cognitive neuroscience (Ansari et al., 2012; De Smedt et al., 2010), an aim that is central to current master's programs in educational neuroscience; mind, brain, and education; or neuro-education.

Conclusions

The application of cognitive neuroscience to educational research represents an interdisciplinary endeavor, which is situated at the crossroads of cognitive neuroscience, cognitive psychology, and educational research. From a methodological point of view, (cognitive) neuroscience offers a toolbox of methods that can be applied to educational research. These methods are applicable if one aims to understand very basic cognitive processes, particularly when educational research adopts a positivist paradigm. Promising applications of cognitive neuroscience to educational research include providing causal models of atypical development, helping to understand learning at the biological level, and enabling the generation of predictions that can be tested in educational research. The development of the human brain is shaped by experience, and a growing number of studies investigate the effects of education on brain (re)organization. The application of cognitive neuroscience to educational research is not without challenges, however. These deal with ecological validity, the scope of a biological explanation, and the potential emergence of misunderstandings. Future cycles of translational research that merge cognitive neuroscientific findings and educational theories will contribute to a more solid knowledge base that will improve education.

  • American Psychiatric Association . (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Washington, DC: American Psychiatric Association.
  • Anderson, J. R. , Bothell, D. , Fincham, J. M. , Anderson, A. R. , Poole, B. , & Qin, Y. L. (2011). Brain regions engaged by part- and whole-task performance in a video game: A model-based test of the decomposition hypothesis. Journal of Cognitive Neuroscience , 23 , 3983–3997.
  • Anderson, J. R. , Pyke, A. A. , & Fincham, J. M. (2016). Hidden stages of cognition revealed in patterns of brain activation. Psychological Science , 27 , 1215–1226.
  • Ansari, D. (2010). Neurocognitive approaches to developmental disorders of numerical and mathematical cognition: The perils of neglecting the role of development. Learning and Individual Differences , 20 (2), 123–129.
  • Ansari, D. , De Smedt, B. , & Grabner, R. H. (2012). Neuroeducation—A critical overview of an emerging field. Neuroethics , 5 (2), 105–117.
  • Antonenko, P. , Paas, F. , Grabner, R. , & van Gog, T. (2010). Using electroencephalography to measure cognitive load. Educational Psychology Review , 22 , 425–438.
  • Aue, T. , Lavelle, L. A. , & Cacioppo, J. T. (2009). Great expectations: What can fMRI research tell us about psychological phenomena? International Journal of Psychophysiology , 73 , 10–16.
  • Barquero, L. A. , Davis, N. , & Cutting, L. E. (2014). Neuroimaging of reading intervention: A systematic review and activation likelihood estimate meta-analysis. PLoS One , 9 (1), e83668.
  • Beck, D. M. (2010). The appeal of the brain in the popular press. Perspectives on Psychological Science , 5 (6), 762–766.
  • Black, J. A. , Myers, C. A. , & Hoeft, F. (2015). The utility of neuroimaging studies for informing educational practice and policy in reading disorders. New Directions in Child and Adolescent Development , 147 , 49–56.
  • Blackwell, L. S. , Trzesniewski, K. H. , & Dweck, C. S. (2007). Implicit theories of intelligence predict achievement across an adolescent transition: A longitudinal study and an intervention. Child Development , 78 , 246–263.
  • Bowers, J. S. (2016). The practical and principled problems with educational neuroscience. Psychological Review , 123 (5), 600–612.
  • Bressler, S. L. , & Menon, V. (2010). Large-scale networks in cognition: Emerging methods and principles. Trends in Cognitive Sciences , 14 , 277–290.
  • Bruer, J. T. (1997). Education and the brain: A bridge too far. Educational Researcher , 26 (8), 4–16.
  • Bunge, S. A. , & Souza, M. J. (2009). Executive function and higher-order cognition: Neuroimaging. Encyclopedia of Neuroscience , 4 , 111–116.
  • Butterworth, B. , Varma, S. , & Laurillard, D. (2011). Dyscalculia: From brain to education. Science , 332 , 1049–1053.
  • Cacioppo, J. T. , Berntson, G. G. , & Nusbaum, H. C. (2008). Neuroimaging as a new tool in the toolbox of psychological science. Current Directions in Psychological Science , 17 , 62–67.
  • Cohen-Kadosh, R. (2014). The stimulated brain: Cognitive enhancement using non-invasive brain stimulation . Oxford: Oxford University Press.
  • De Smedt, B. (2014). Advances in the use of neuroscience methods in research on learning and instruction. Frontline Learning Research , 6 , 7–14.
  • De Smedt, B. , Ansari, D. , Grabner, R. H. , Hannula, M. M. , Schneider, M. , & Verschaffel, L. (2010). Cognitive neuroscience meets mathematics education. Educational Research Review , 5 , 97–105.
  • De Smedt, B. , Ansari, D. , Grabner, R. H. , Hannula-Sormunen, M. , Schneider, M. , & Verschaffel, L. (2011). Cognitive neuroscience meets mathematics education: It takes two to tango. Educational Research Review , 6 , 232–237.
  • De Smedt, B. , & Grabner, R. (2015). Applications of neuroscience to mathematics education. In A. Dowker & R. Cohen-Kadosh (Eds.), Oxford handbook of mathematical cognition (pp. 613–636). Oxford: Oxford University Press.
  • De Smedt, B. , Noël, M. P. , Gilmore, C. , & Ansari, D. (2013). The relationship between symbolic and non-symbolic numerical magnitude processing skills and the typical and atypical development of mathematics: A review of evidence from brain and behavior. Trends in Neuroscience and Education , 2 , 48–55.
  • Dehaene, S. , & Cohen, L. (1997). Cerebral pathways for calculation: Double dissociation between rote verbal and quantitative knowledge of arithmetic. Cortex , 33 , 219–250.
  • Dehaene, S. , Cohen, L. , Morais, J. , & Kolinsky, R. (2015). Illiterate to literate: Behavioural and cerebral changes induced by reading acquisition. Nature Reviews Neuroscience , 16 , 234–244.
  • Diamond, A. , & Amso, D. (2008). Contributions of neuroscience to our understanding of cognitive development. Current Directions in Psychological Science , 17 , 136–141.
  • Dick, F. , Lloyd-Fox, S. , Blasi, A. , Elwell, C. , & Mills, D. (2014). Neuroimaging methods. In D. Mareschal , B. Butterworth , & A. Tolmie (Eds.), Educational neuroscience (pp. 13–45). Malden, MA: Wiley-Blackwell.
  • Dikker, S. , Wan, L. , Davidesco, I. , Kaggen, L. , Oostrik, M. , McClintock, J. , et al. (2017). Brain-to-brain synchrony tracks real-world dynamic group interactions in the classroom. Current Biology , 27 , 1375–1380.
  • Dowker, A. (2005). Individual differences in arithmetic: Implications for psychology, neuroscience and education . New York: Psychology Press.
  • Dyson, N. I. , Jordan, N. C. , & Glutting, J. (2013). A number sense intervention for low-income kindergartners at risk for mathematics difficulties. Journal of Learning Disabilities , 46 , 166–181.
  • Eden, G. F. , Olulade, O. A. , Evans, T. M. , Krafnick, A. J. , & Alkire, D. R. (2016). Developmental dyslexia. In G. Hickok & S. Small (Eds.), The neurobiology of language (pp. 815–826). Amsterdam: Elsevier.
  • Fischer, K. W. (2009). Mind, brain and education: Building a scientific groundwork for teaching and learning. Mind, Brain, and Education , 3 , 3–16.
  • Fletcher, J. M. , Lyon, G. R. , Fuchs, L. S. , & Barnes, M. A. (2007). Learning disabilities: From identification to intervention . New York: Guilford.
  • Fraga González, G. , Žarić, G. , Tijms, J. , Bonte, M. , Blomert, L. , Leppänen, P. H. T. , & van der Molen, M. W. (2016). Responsivity to dyslexia training indexed by the N170 amplitude of the brain potential elicited by word reading. Brain and Cognition , 106 , 42–54.
  • Gabrieli, J. E. D. (2009). Dyslexia a new synergy between education and cognitive neuroscience. Science , 325 , 280–283.
  • Giedd, J. N. , & Rapoport, J. L. (2010). Structural MRI of pediatric brain development: What have we learned and where are we going? Neuron , 67 , 728–734.
  • Goswami, U. (2004). Neuroscience and education. British Journal of Educational Psychology , 74 , 1–14.
  • Goswami, U. (2008). Principles of learning, implications for teaching: A cognitive neuroscience perspective. Journal of Philosophy of Education , 42 , 381–399.
  • Hoeft, F. , McCandliss, B. D. , Black, J. M. , Gantman, A. , Zakerani, N. , Hulme, C. , et al. (2011). Neural systems predicting long-term outcome in dyslexia. Proceedings of the National Academy of Sciences of the United States of America , 108 , 361–366.
  • Hoeft, F. , Ueno, T. , Reiss, A. L. , Whitfield-Gabrieli, S. , Glover, G. H. et al. (2007). Prediction of children’s reading skills using behavioral, functional, and structural neuroimaging measures. Behavioral Neuroscience , 121 , 602–613.
  • Howard-Jones, P. (2010). Introducing neuroeducational research: Neuroscience, education and the brain from contexts to practice . London: Routledge.
  • Howard-Jones, P. (2014a). Neuroscience and education: A review of educational interventions and approaches informed by neuroscience . Bristol, UK: Education Endowment Foundation.
  • Howard-Jones, P. (2014b). Neuroscience and education: Myths and messages. Nature Reviews Neuroscience , 15 , 817–824.
  • Howard-Jones, P. , Varma, S. , Ansari, D. , Butterworth, B. , De Smedt, B. , Goswami, U. , et al. (2016). The principles and practices of educational neuroscience: Commentary on Bowers (2016). Psychological Review , 123 , 620–627.
  • Johnson, M. H. , & de Haan, M. (2011). Developmental cognitive neuroscience (3rd ed.). Malden, MA: Wiley-Blackwell.
  • Kaufmann, L. , Vogel, S. E. , Wood, G. , Kremser, C. , Schocke, M. , Zimmerhackl, L. B. , & Koten, J. W. (2008). A developmental fMRI study of nonsymbolic numerical and spatial processing. Cortex , 44 , 376–385.
  • Keller, T. A. , & Just, M. A. (2009). Altering cortical connectivity: Remediation-induced changes in the white matter of poor readers. Neuron , 64 , 624–631.
  • Klimesch, W. , Schack, B. , & Sauseng, P. (2005). The functional significance of theta and upper alpha oscillations. Experimental Psychology , 52 , 99–108.
  • Krause, B. , & Cohen-Kadosh, R. (2013). Can transcranial electrical stimulation improve learning difficulties in atypical brain development? A future possibility for cognitive training. Developmental Cognitive Neuroscience , 6 , 176–194.
  • Krause, B. , & Cohen-Kadosh, R. (2014). Not all brains are created equal: The relevance of individual differences in responsiveness to transcranial electrical stimulation. Frontiers in Systems Neuroscience , 8 , 25.
  • Kucian, K. , Grond, U. , Rotzer, S. , Henzi, B. , Schoenmann, C. , Plangger, F. , et al. (2011). Mental number line training in children with developmental dyscalculia. Neuroimage , 57 , 782–795.
  • Lieberman, M. D. , Schreiber, D. , & Ochsner, K. N. (2003). Is political cognition like riding a bicycle? How cognitive neuroscience can inform research on political thinking. Political Psychology , 24 , 681–704.
  • Lindell, A. K. , & Kidd, E. (2011). Why right-brain teaching is half-witted: A critique of the misapplication of neuroscience to education. Mind, Brain, and Education , 5 , 121–127.
  • Long, I. , Malone, S. A. , Tolan, A. , Buygoyne, K. , Herron-Dulaney, M. , Witteveen, K. , & Hulme, C. (2016). The cognitive foundations of early arithmetic skills: It is counting and number judgment, but not finger gnosis, that count. Journal of Experimental Child Psychology , 152 (2), 327–334.
  • Maguire, E. A. , Gadian, D. G. , Johnsrude, I. S. , Good, C. D. , Ashburner, J. , Frackowiak, R. S. J. , & Frith, C. D. (2000). Navigation-related structural change in the hippocampi of taxi drivers. Proceedings of the National Academy of Sciences , 97 , 4398–4403.
  • Mangels, J. A. , Butterfield, B. , Lamb, J. , Good, C. , & Dweck, C. S. (2006). Why do beliefs about intelligence influence learning success? A social cognitive neuroscience model. Social Cognitive and Affective Neuroscience , 1 (2), 75–86.
  • Mareschal, D. , Butterworth, B. , & Tolmie, A. (2014). Educational neuroscience . London: Wiley Blackwell.
  • McCandliss, B. D. (2010). Educational neuroscience: The early years. Proceedings of the National Academy of Sciences of the United States of America , 107 , 8049–8050.
  • Melby-Lervag, M. , & Hulme, C. (2013). Is working memory training effective? A meta-analytic review. Developmental Psychology , 49 , 270–291.
  • Melby-Lervag, M. , Lyster, S. A. , & Hulme, C. (2012). Phonological skills and their role in learning to read: A meta-analytic review. Psychological Bulletin , 138 , 322–352.
  • Menon, V. (2015). Arithmetic in the child and adult brain. In R. Cohen-Kadosh & A. Dowker (Eds.), Oxford handbook of numerical cognition (pp. 502–530). Oxford: Oxford University Press.
  • Molfese, D. L. (2000). Predicting dyslexia at 8 years of age using neonatal brain responses. Brain and Language , 72 (3), 238–245.
  • Morgan-Short, K. , Steinhauer, K. , Sanz, C. , & Ullman, M. T. (2012). Explicit and implicit second language training differentially affect the achievement of native-like brain activation patterns. Journal of Cognitive Neuroscience , 24 (4), 933–947.
  • Neville, H. , Stevens, C. , Pakulak, E. , Bell, T. A. , Fanning, J. , Klein, S. , & Isbell, E. (2013). Family-based training program improves brain function, cognition, and behavior in lower socioeconomic status preschoolers. Proceedings of the National Academy of Sciences of the United States of America , 110 , 12138–12143.
  • Nielsen, J. A. , Zielinski, B. A. , Ferguson, M. A. , Lainhart, J. E. , & Anderson, J. S. (2013). An evaluation of the left-brain vs. right-brain hypothesis with resting state functional connectivity magnetic resonance imaging. PLoS One , 8 (8), e71275.
  • Norman, L. , Carlisi, C. , Lukito, S. , Hart, H. , Mataix-Cols, D. , Radua, J. , & Rubia, K. (2016). A comparative meta-analysis of structural and functional brain abnormalities in ADHD and OCD. JAMA Psychiatry , 73 , 815–825.
  • Owen, A. M. , Hampshire, A. , Grahn, J. , Stenton, R. , Dajani, S. , Burns, A. S. , et al. (2010). Putting brain training to the test. Nature , 465 , 775–778.
  • Price, C. J. (2012). A review and synthesis of the first 20 years of PET and fMRI studies of heard speech, spoken language and reading. Neuroimage , 62 , 816–847.
  • Redcay, E. , Dodell-Feder, D. , Pearrow, M. J. , Mavros, P. L. , Kleiner, M. , Gabrieli, J. D. E. , & Saxe, R. (2010). Live face-to-face interaction during fMRI: A new tool for social cognitive neuroscience. Neuroimage , 50 , 1639–1647.
  • Richardson, F. M. , & Price, C. J. (2009). Structural MRI studies of language function in the undamaged brain. Brain Structure and Function , 213 , 511–523.
  • The Royal Society (2011). Brain waves module 2: Neuroscience: Implications for education and lifelong learning . London: The Royal Society.
  • Schneider, M. , Beeres, K. , Coban, L. , Merz, S. , Schmidt, S. , Stricker, J. , & De Smedt, B. (2017). Associations of non-symbolic and symbolic numerical magnitude processing with mathematical competence: A meta-analysis. Developmental Science , 20 , e12372.
  • Schuijer, J. W. , de Jong, I. M. , Kupper, F. , & van Atteveldt, N. M. (2017) Transcranial electrical stimulation to enhance cognitive performance of healthy minors: A complex governance challenge. Frontiers in Human Neuroscience , 11 , 142.
  • Shaywitz, S. E. , Morris, R. , & Shaywitz, B. A. (2009). The education of dyslexic children from childhood to young adulthood. Annual Review of Psychology , 59 , 451–475.
  • Skeide, M. A. , Kumar, U. , Mishra, R. K. , Tripathi, V. N. , Guleria, A. , Singh, J. P. , et al. (2017). Learning to read alters cortico-subcortical cross-talk in the visual system of illiterates. Science Advances , 3 , e1602612.
  • Sloane, F. C. (2008). Randomized trials in mathematics education: Recalibrating the proposed high watermark. Educational Researcher , 9 , 624–630.
  • Smeyers, P. (2016). Neuromyths for educational research and the educational field? In P. Smeyers & M. Depaepe (Eds.), Educational research: Discourses of change and changes of discourse (pp. 71–86). Cham, Switzerland: Springer.
  • Snowling, M. , & Hulme, C. (2011). Evidence-based interventions for reading and language difficulties: Creating a virtuous circle. British Journal of Educational Psychology , 81 , 1–23.
  • Snowling, M. J. , & Melby-Lervag, M. (2016). Oral language deficits in familial dyslexia: A meta-analysis and review. Psychological Bulletin , 142 (5), 498–545.
  • Squire, L. R. , Berg, D. , Bloom, F. E. , Du Lac, S. , Ghosh, A. , & Spitzer, N. C. (2013). Fundamental neuroscience (4th ed.). Oxford: Academic Press.
  • Stern, E. , & Schneider, M. (2010). A digital road map analogy of the relationship between neuroscience and educational research. ZDM—The International Journal on Mathematics Education , 42 , 511–514.
  • Supekar, K. , Swigart, A. J. , Tenison, C. , Jolles, D. D. , Rosenberg-Lee, M. , Fuchs, L. , & Menon, V. (2013). Neural predictors of individual differences in response to math tutoring in primary-grade school children. Proceedings of the National Academy of Sciences , 110 , 8230–8235.
  • Tanaka, H. , Black, J. M. , Hulme, C. , Stanley, L. M. , Kesler, S. R. , Whitfield-Gabrieli, S. , et al. (2011). The brain basis of the phonological deficit in dyslexia is independent of IQ. Psychological Science , 22 (11), 1442–1451.
  • Temple, E. , Deutsch, G. K. , Poldrack, R. A. , Miller, S. L. , Tallal, P. , Merzenich, M. M. , & Gabrieli, J. D. E. (2003). Neural deficits in children with dyslexia ameliorated by behavioral remediation: Evidence from functional MRI. Proceedings of the National Academy of Sciences of the United States of America , 100 , 2860–2865.
  • van Merriënboer, J. J. G. , & Kirschner, P. (2007). Ten steps to complex learning. A systematic approach to four-component instructional design . New York: Lawrence Erlbaum.
  • Vandermosten, M. , Hoeft, F. , & Norton, E. (2016). Integrating MRI brain imaging studies of pre-reading children with current theories of developmental dyslexia: A review and quantitative meta-analysis. Current Opinion in Behavioral Sciences , 10 , 155–161.
  • Vogel, S. , Matjeko, A. , & Ansari, D. (2016). Imaging the developing human brain using functional and structural magnetic resonance imaging: Methodological and practical guidelines. In J. Prior & J. Van Herwegen (Eds.), Practical research with children (pp. 46–69). New York: Routledge.
  • Ward, J. (2006). The student’s guide to cognitive neuroscience . New York: Psychology Press.
  • Wasner, M. , Nuerk, H. C. , Martignon, L. , Roesch, S. , & Moeller, K. (2016). Finger gnosis predicts a unique but small part of variance in initial arithmetic performance. Journal of Experimental Child Psychology , 146 , 1–16.
  • Weisberg, D. S. , Keil, F. C. , Goodstein, J. , Rawson, E. , & Gray, J. R. (2008). The seductive allure of neuroscience explanations. Journal of Cognitive Neuroscience , 20 , 470–477.
  • Weisberg, D. S. , Taylor, J. C. V. , & Hopkins, E. J. (2015). Deconstructing the seductive allure of neuroscience explanations. Judgment and Decision Making , 10 , 429–441.


The effectiveness of applied learning: an empirical evaluation using role playing in the classroom

Journal of Research in Innovative Teaching & Learning

ISSN : 2397-7604

Article publication date: 10 December 2018

Issue publication date: 3 December 2019

Purpose

The purpose of this paper is to evaluate the effectiveness of role playing as an applied learning technique for enhancing classroom experiences, as compared to traditional lecture methods.

Design/methodology/approach

This study uses the pre-test/post-test design to conduct experiments with several control and experimental groups. Subjects are graduate students in an MBA program at a private, non-profit university in a traditional classroom setting.

Findings

Students in the experimental group gained significantly more knowledge (post-test minus pre-test scores) – 45 percent higher – through participation in the role playing exercise than did the control group.

Research limitations/implications

This study represents only a single educational discipline explored using a single role playing learning activity. Impacts on long-term knowledge retention should be studied further.

Practical implications

Educators should enhance their classroom experience with more applied learning activities such as role playing in order to increase knowledge gain and potentially lengthen knowledge retention.

Originality/value

This study uses a customized role playing activity within a business curriculum as one of many applied learning techniques. The value to students was shown by a significantly higher gain in knowledge, alongside greater enjoyment of the classroom experience that may encourage further lifelong learning.

Keywords

  • Experimental design
  • Applied learning
  • Teaching styles
  • Enhanced learning
  • Role playing

Acharya, H. , Reddy, R. , Hussein, A. , Bagga, J. and Pettit, T. (2019), "The effectiveness of applied learning: an empirical evaluation using role playing in the classroom", Journal of Research in Innovative Teaching & Learning , Vol. 12 No. 3, pp. 295-310. https://doi.org/10.1108/JRIT-06-2018-0013

Emerald Publishing Limited

Copyright © 2018, Harneel Acharya, Rakesh Reddy, Ahmed Hussein, Jaspreet Bagga and Timothy Pettit

Published in Journal of Research in Innovative Teaching & Learning . Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode

1. Introduction

In the era of globalization, the functions of business are dynamic, and their success depends on how efficiently companies manage their inter-firm relationships with different channel members within the supply chain. Be it a manufacturing or a service company, supply chain management (SCM) plays a vital role in determining the performance of the business. The current research is focused on preparing supply chain leaders to best establish and maintain proper business relationships through the use of a selected applied learning technique – role playing. This work will compare evolving thought on applied learning vs traditional lectures and recommend an alternative way to prepare future managers to serve their supply chain.

1.1 Background

With the rise in competition in the global market, SCM activities are becoming more dynamic, complex and diversified. Therefore, it has become imperative to improve and enhance the skills of managers so that they can deal with the growing challenges in the supply chain ( Kovacs and Pato, 2014 ; Kotzab et al. , 2018 ). Further, the pace of change, the explosion of knowledge and the technological revolution have had a great influence on the field of education, raising awareness of the need for new pedagogical and learning approaches ( Bates, 2005 ; Zhang and Nunamaker, 2003 ). In order to produce ready professionals, it is vital that instructors of business courses such as SCM know how best to serve their customers – the students.

Motivating students so that they can understand the importance of integrated-topic scenarios is another issue of concern ( Chang et al. , 2009 ). Therefore, educational institutions are changing their teaching patterns and utilizing comprehensive learning solutions instead of focusing on subject knowledge alone ( Ten Eyck, 2011 ; Preziosi and Alexakis, 2011 ; Tran and Lewis, 2012 ; Westrup and Planander, 2013 ; Tabak and Lebron, 2017 ; Paul and Ponnam, 2018 ). For instance, in order to determine the quantities to produce, managers need to consider their decisions with regard to demand forecasting, allocation of resources and inventory policy, as well as the scheduling of manpower and machinery. Hence, it is important that future managers first study the different aspects of a decision and then consider them in totality, not in isolation. Several opportunities exist for advanced learning in applied settings, such as internships, residencies and practical learning courses such as capstone projects with industry clients ( Brown et al. , 2018 ; Silva et al. , 2018 ; Bhattacharya and Neelam, 2018 ). However, applied learning is also an option for basic and intermediate courses seeking to meet the changing demands of students.

A vast body of literature evaluates the effectiveness of different practical and applied approaches to learning that teaching institutions can embrace to contribute significantly to producing ready managers. The research by Alessi and Trollip (1985) , Martocchio and Webster (1992) , Quinn (1996) and Li et al. (2017) shows that the use of applied gaming tools in teaching different subjects resulted in better knowledge retention, and that such tools have acted as useful pedagogical supplements to traditional classroom course material.

In light of the changing business environment and the complex dynamics of SCM, there is a need to alter the current traditional pedagogy and equip educators with modern, comprehensive, applied, scenario-based teaching techniques. Therefore, this project tests the effectiveness and benefits of role playing as just one technique that can be used to implement applied learning in the field of SCM.

1.2 Problem statement

An applied learning role playing workshop used to teach an SCM concept will yield higher levels of knowledge gain than the same aspect of SCM taught through traditional learning systems.

In the following sections, this paper will present a comprehensive review of literature regarding applied learning, a research methodology, the analysis of the data and final conclusions.

2. Literature review

The rise in globalization and the need for innovation have marked significant changes in supply chains. The increasing role of SCM is associated with the rise in integration among different members of the supply chain ( Christopher et al. , 2011 ). In the case of a manufacturing chain, the manufacturer does not reach consumers directly in order to sell its products, and therefore needs channel partners such as wholesalers, distributors and retailers to reach consumers. As an intermediary, the retailer transfers information about customer tastes and preferences, and feedback about the products, to the manufacturer. Hence, resellers become a vital part of the supply chain, and the success of the manufacturer is highly dependent upon the relationship between the manufacturer and its channel partners. This project explores best practices in educating future supply chain managers on the specific process of establishing and maintaining successful business relationships.

2.1 Effective learning and pedagogy

Effective learning and education are characterized by rigorous coursework and effective pedagogical approaches that aim at enhancing the success of all students ( Rhodes, 2007 ). Hmelo-Silver and Barrows (2006) hold that teaching is a complex task and that the instructor should draw on a repertoire of strategies in order to make learning effective. Effective education is not just about teaching but about learning. To enhance learning, students should be given an optimal level of information and assignments ( Preziosi and Alexakis, 2011 ). This means that the material should be appropriately pitched: if it is too demanding, students will be discouraged, and if it is unchallenging, learners' interest will tend to wane.

Instructors can enhance learning by being flexible, assimilating new subject content and adopting a variety of pedagogical methodologies ( Douglas et al. , 2008 ). It is also vital that the instructor's teaching style be effective enough that lessons are well understood and can be applied by the learners ( Gooden et al. , 2009 ). As Niemi et al. (2010) note, effective education can be developed only when students are involved in decision making about their learning and their experiences and stories as learners are heard. Ferguson et al. (2011) supported this argument and focused on involving students' opinions in generating meaningful dialogue and informed decision making. Instructors should be flexible enough to change their teaching practices after understanding the experiences of students in order to develop an effective learning forum ( Robinson and Taylor, 2007 ).

Another tool for developing effective learning sessions and pedagogy is the ability to scaffold students' learning ( Husbands and Pearce, 2012 ). Scaffolding for learning can be provided by offering intellectual, social and emotional support ( James and Pollard, 2011 ), which motivates students to develop interest in their studies. Kim and Hannafin (2011) have posited that students who do not develop an interest in their studies and the learning process are unable to progress. Real-world scenarios are forums for creating interest, motivation and scaffolding. These techniques culminate in a new teaching method called applied learning.

2.2 Applied learning

Learning is known to be effective when the learner is not a passive recipient of knowledge but a proactive participant in the entire learning process. Active learning is possible when the learner perceives a relationship between prior knowledge and new knowledge ( Christmas, 2014 ). Bringing the real world into the classroom is viewed as one of the crucial ways to promote effective learning. Therefore, there is an absolute need to move beyond traditional approaches and pedagogy to new techniques, so that learners can use their learning to solve real-world problems ( Hui and Koplin, 2011 ).

Learning-by-doing is considered an apt approach in today's technology-driven world, wherein continuous learning should be the premise for the overall development of students ( Serbessa, 2006 ). The pedagogical activity in applied learning should be active and involve concrete processes, following the motto of learning by doing ( Schwartzman and Henry, 2009 ). Effective education should be centered on the learners, with the instructor guiding and facilitating the learner and the learning process rather than asserting control or simply delivering lectures toward the targeted learning goals ( Silcock and Brundrett, 2001 ). With the rise in globalization and the changing demands of the business world, students' learning approaches may need to be altered to provide the best possible fit. In light of this dynamism, a student-centered approach has become a necessity when deciding upon the learning styles to apply in the classroom ( Hudson, 2009 ). Moreover, learners' minds are complex and heterogeneous, and an effective classroom should therefore consider the constructivist perspective of learning. To develop a constructivist perspective of learning, it is vital to implement a student-centered model of instruction that blends differentiated curriculum and assessment paradigms ( Klein, 2003 ). The learning theory of constructivism aims at making learning meaningful. From this perspective, learning is a self-directed sociocultural process in which the instructor assumes the role of a facilitator ( Tobin and Tippins, 1993 ).

2.3 Tools of applied learning in SCM

SCM education techniques are similar to those of other fields. As suggested by Phillips (2005) , applied learning can be achieved through chats, role plays and case studies for application-based education ( Miller et al. , 2016 ). Additional tools of applied learning adopted by Ruey (2010) include submitted artifacts, interviews, surveys and observations. Simulation is another technique used for teaching useful concepts through applied learning ( Ten Eyck, 2011 ). Instructors can adopt different gaming tools to facilitate real-time applied learning in teaching SCM, logistics management and business relationship management ( Lewis and Maylor, 2007 ; Quinn, 1996 ). These gaming tools are effective in motivating active participation in the course, as they foster competition and add excitement to the learning process ( Elgood, 1997 ). Further, Dempsey et al. (1996) established that games create a playful learning environment that is goal-oriented, challenging and competitive.

Using simulation tools and models in teaching SCM is more insightful and helps to evaluate alternatives and outcomes of supply chains which further enhances performance ( Persson and Araldi, 2009 ). Moreover, it becomes possible to use quantifying measures for problem solving, which is a more reliable and objective approach in teaching ( Katsaliakia et al. , 2014 ). Simulation-based models should be used with caution, however, as the learners may find it difficult to understand the use and importance of these pedagogies ( Chwif et al. , 2000 ). Therefore, it is imperative that the learners become familiar with the use of these applied learning tools and have the competence for successfully implementing these models ( Katsaliakia et al. , 2014 ). Hence, the role of the instructor becomes important in explaining the need and the correct way to implement these applied learning tools so that the learning benefits can be maximized. The most popular simulation game used while teaching SCM is the MIT Beer Game, in which production and distribution in a multi-stage distribution channel is facilitated ( Chang et al. , 2009 ).

2.4 Use of role playing as an effective tool for applied learning

Role playing as an active learning technique involves a high level of participation from students ( Howell, 1991 ; Westrup and Planander, 2013 ; Barnabe, 2016 ; Tabak and Lebron, 2017 ). Role playing is a group activity in which more than one person assumes different roles in a given situation ( Rao and Stupans, 2012 ) with the aim of acquiring learning experiences ( Sogunro, 2004 ). As per Yardley-Matwiejczuk (1997) , role playing puts participants in “as if” situations through simulations and actions depending on specific events and circumstances, such that different behaviors, roles and arguments directly influence and deepen the learning experience. The preparatory part of role playing is crucial, as it involves establishing the descriptions of the roles along with the prerequisites for the involved participants ( Westrup and Planander, 2013 ).

Learning in role playing is facilitated by observing as well as acting out the series of events in the situation at hand. Role playing helps students to understand the dilemmas in situations and highlights values and interpretations that are not possible to study in the traditional lecture mode of teaching ( Bryant and Darwin, 2004 ). As per Rao and Stupans (2012) , role playing helps to provide a better understanding of situations and of aspects that are critical to the subject matter. Yang and Quadir (2018) found significant evidence to support language learning during a role playing game. Further, McCarthy and Anderson (2000) measured the effectiveness of role playing as compared with traditional methods of teaching and concluded that the active learning tool of role playing leads to better absorption and retention of information. Not only is it more effective, but role play also provides a variation from the usual lecture-based pedagogy and thus adds an applied and practical aspect to the theory taught in the classroom ( Sogunro, 2004 ). Role playing is also recognized as a pedagogical tool that helps to depict the manner in which one acts, reacts and communicates in a social interaction ( Daly et al. , 2009 ). Thereby, along with critical thinking, it enhances communication and interpersonal skills. Further, as per Rao and Stupans (2012) , role playing provides deeper insight into the business world, and students who take part in it can learn about the situations, issues, alternatives and approaches they will have to tackle later in their working lives. This means of pedagogy hones the skill sets students will need in future employment ( Ruhanen, 2005 ). Sogunro (2004) posited that role playing helps students view a specific situation from different perspectives and not just the single perspective of the instructor or any individual. Therefore, role playing acts as an important active learning tool. However, there are drawbacks for student learning related to the differing knowledge backgrounds of individual students ( Qiu et al. , 2018 ). Another study explored role playing for learning in the marketing field to help students create favorable customer experiences without direct supervision from the instructor ( Paul and Ponnam, 2018 ). Business ethics is another educational realm well suited to role playing ( Sauser and Sims, 2018 ). Role playing also helps students to understand the feelings, values, attitudes and body language that provide the context of the business situation at hand. This is very valuable in business education, as role playing supports cross-sectoral integration and helps students experience the complexity of decision-making ( Ferrero et al. , 2018 ). Moreover, as per Alkin and Christie (2002) , role playing is a useful tool for teaching conflict resolution and preparing students with the skills required for effective conflict management.

2.5 Knowledge retention

According to the Accreditation Council for Pharmacy Education (nd) , schools and colleges should make use of active learning methods so that critical thinking, problem-solving skills and better retention of knowledge can develop. Many researchers have concluded that there is a positive relationship between collaborative and applied learning approaches and higher cognitive and affective outcomes ( Tran and Lewis, 2012 ; Chang et al. , 2009 ; Li et al. , 2017 ; Ferrero et al. , 2018 ). This research is supported by Johnson and Johnson (2009) , who posit that applied learning tools augment students' achievement rates and increase their knowledge retention. It has also been shown that cooperative simulations such as role playing lead to greater academic, social and psychological advantages ( Johnson and Johnson, 2005 ). As per Beck and Chizhik (2008) and Zain et al. (2009) , academic performance improves drastically when the pedagogy contains applied learning tools.

The pre-test/post-test experimental method is well suited to evaluating student performance because it adjusts for learners' previous knowledge while directly measuring each student's gain in knowledge ( Kilic, 2008 ; Doymus et al. , 2010 ). The growing literature continues to support applied learning techniques, and this study explores one such option in the realm of SCM education.

3. Methodology

The experiment for this project was conducted with 91 graduate students at a private, non-profit university over a period of 1.5 years, across seven different sections of the same four-week course. These seven classes were pre-selected to divide the sample approximately into control and experimental groups. The control group comprised 59 students, who were first given a pre-test on the subject matter, then received a traditional face-to-face lecture by the instructor using a PowerPoint presentation, and finally were given a post-test to capture the knowledge gained. The experimental group had 32 students, who were also given a pre-test; their learning was then facilitated using the applied learning technique of role playing, after which a post-test was conducted to assess the knowledge gain. The Appendix shows the questions used for the evaluations, with post-test questions re-sorted.

3.1 Data collection

Traditional teaching for the Control Group used a 30-min lecture on the subject of business relationships in the context of SCM, focusing on the partnership model (see Lambert et al. , 2004, 2010 ). In contrast, the applied learning was facilitated using a role playing exercise for the experimental group. The exercise required a real-world business relationship consisting of a seller and a buyer, as selected by the students. For example, one student group self-selected the Coca-Cola Company selling soft drinks to the McDonald’s Corporation – guidelines were that both companies be familiar to all students in the class. The role playing exercise took 45 min, in which the first 10 min explained the research study and the partnership model. Thereafter, the experimental classes were randomly divided into two groups, whereby the first group played various roles in the Coca-Cola Company (e.g. the supplier), such as the President/Chief Executive Officer, Vice President for Supply Management, Vice President for Operations and Vice President for Distribution, while the other group performed similar roles for the McDonald’s Corporation (e.g. the buyer). Both the groups were then given time to find background data on the company, such as identifying the actual president or CEO, location of the headquarters, number of employees, number of stores, revenues and the mission statement of the company. After this, based on the partnership model, the major drivers – “compelling reasons for partnering with the other firm” ( Lambert et al. , 2004 , p. 23) – were identified and scored by each team. Once the drivers were identified by the teams separately, a facilitator session was conducted, whereby both the teams were asked to rate the facilitators jointly. They then selected the proper level of business relationship based on the partnership model and discussed the appropriate actions that these companies should undertake toward increasing performance otherwise not obtainable without a partnership. With such a process, students played an active role in gathering the data and reaching their own conclusions based on their active participation.

3.2 Hypotheses testing

H2 (a/b). There is a gain in knowledge during each of the learning activities (lecture and workshop).

H3. There is a greater gain in knowledge for the workshop as compared to the lecture learning activity.

Figure 1 shows the knowledge connections before and after the learning experience for both groups. It should be noted that responses were scored on a scale of 0 to 6 (see the Appendix for a sample instrument), with partial credit given as appropriate in increments of 1/2, 1/3 and 1/6 points depending on the question, so continuous variables were assumed. However, results were also confirmed with non-parametric Mann-Whitney U-tests, which are used when the data are ordinal or when the assumptions of the t-test are not met ( Boslaugh and Watters, 2008 ).
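As an illustration of this kind of robustness check, the sketch below (not part of the original study) runs a one-tailed Mann-Whitney U-test with SciPy on two invented arrays of gain scores; the array values and variable names are hypothetical and serve only to show the mechanics of the test.

```python
import numpy as np
from scipy import stats

# Hypothetical gain scores (post-test minus pre-test) on the 0-6 scale,
# with fractional partial credit; these are NOT the study's actual data.
gain_lecture = np.array([0.5, 1.0, 1.5, 0.0, 2.0, 1.0, 0.5, 1.5, 1.0, 2.5])
gain_roleplay = np.array([1.5, 2.0, 1.0, 2.5, 1.5, 3.0, 1.0, 2.0, 2.5, 1.5])

# Non-parametric alternative to the t-test, appropriate for ordinal data
# or when the t-test assumptions are in doubt.
u_stat, p_value = stats.mannwhitneyu(gain_roleplay, gain_lecture,
                                     alternative="greater")
print(f"Mann-Whitney U = {u_stat:.1f}, one-tailed p = {p_value:.3f}")
```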

4. Analysis

This section analyzes the knowledge acquired during the learning sessions, followed by a comparison of the knowledge gain between the control and experimental groups. After all experiments were conducted, the data were reviewed to test the assumptions of normality and continuity. Figures 2 and 3 show slight skewness toward the lower end for the control group and slight skewness toward the upper end for the experimental group, with no outliers except for a single perfect score in the experimental group (role playing).

For a statistical test of the normality assumption, the Shapiro-Wilk normality test ( Shapiro and Wilk, 1965 ) was performed; all four data sets were acceptable at the p<0.05 level (see Table I ), which, together with group sample sizes over 30, supports the normality assumption.
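The following minimal sketch, with an invented score array standing in for one group's pre-test results, shows how such a Shapiro-Wilk check could be run with SciPy; it is illustrative only and not the authors' procedure.

```python
import numpy as np
from scipy import stats

# Hypothetical pre-test scores for one group on the 0-6 scale (illustrative only).
pre_test_scores = np.array([1.0, 2.0, 1.5, 2.5, 0.5, 3.0, 1.5, 2.0, 1.0, 2.5,
                            1.5, 3.5, 2.0, 1.0, 0.5, 2.5, 3.0, 1.5, 2.0, 2.5])

# Shapiro-Wilk returns the W statistic and a p-value; a small p-value would
# indicate a departure from normality.
w_stat, p_value = stats.shapiro(pre_test_scores)
print(f"W = {w_stat:.4f}, p = {p_value:.3f}")
```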

It was next appropriate to verify the equivalency of the groups before making any comparisons. This evaluation of the experimental group relative to the control group must be done prior to the implementation of the treatment. Therefore, the pre-test scores were used as a direct measure of the subjects' level of knowledge prior to the experimental treatment (e.g. students may have had previous experience with the material or may have reviewed course material prior to the class). The two-sample Student's t-test assuming unequal variances below verifies this assumption in that there is no statistically significant difference between the two groups' mean pre-test scores (see Table II ).
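A minimal sketch of this baseline equivalence check, again with invented pre-test arrays: Welch's two-sample t-test (unequal variances) is run with SciPy, and a large two-tailed p-value, as reported in Table II, would indicate no detectable difference between the groups before the treatment.

```python
import numpy as np
from scipy import stats

# Hypothetical pre-test scores for the two groups (not the study's raw data).
pre_control = np.array([2.0, 1.5, 1.0, 2.5, 3.0, 1.5, 2.0, 0.5, 2.5, 1.0])
pre_experimental = np.array([1.5, 2.0, 2.5, 1.0, 2.0, 1.5, 2.5, 2.0, 1.0, 3.0])

# equal_var=False gives Welch's t-test, which does not assume equal variances.
t_stat, p_two_tailed = stats.ttest_ind(pre_control, pre_experimental,
                                       equal_var=False)
print(f"t = {t_stat:.3f}, two-tailed p = {p_two_tailed:.3f}")
# A p-value well above 0.05 supports treating the groups as equivalent at baseline.
```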

4.1 Evaluation of learning

The next step was to compute the gain in knowledge from both the traditional and applied learning techniques, using each student's pre- and post-test scores. To verify that learning occurred in each instructional format, scores were tested again using a Student's t-test, this time for paired samples, as each subject's pre- and post-test scores are not independent. The results obtained for the lecture and workshop are given in Tables III and IV ; using each group's average pre-test score as a baseline, the control group scored 53 percent higher after the traditional lecture, while the experimental group scored 76 percent higher after the role playing activity. A graphical interpretation can be seen in Figures 4 and 5 . With these results ( Tables II–IV ), the experiment is considered valid and H2 is supported for both groups, so further analysis can proceed to evaluate H3 , which compares the two groups.
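As a sketch of this within-group check, the paired t-test below compares invented pre- and post-test scores for the same students; halving the two-sided p-value for a directional (post greater than pre) test is one common convention and is an assumption of this example, not a detail taken from the paper.

```python
import numpy as np
from scipy import stats

# Hypothetical paired scores for one group: the same students before and after.
pre = np.array([1.0, 2.0, 1.5, 2.5, 0.5, 3.0, 1.5, 2.0, 1.0, 2.5])
post = np.array([2.0, 3.0, 2.5, 3.0, 1.5, 4.0, 2.0, 3.5, 2.0, 3.0])

# Paired (dependent-samples) t-test on the score differences.
t_stat, p_two_sided = stats.ttest_rel(post, pre)

# One-tailed p-value for the directional hypothesis post > pre,
# valid here because the mean difference is positive.
p_one_tailed = p_two_sided / 2
print(f"t = {t_stat:.3f}, one-tailed p = {p_one_tailed:.4f}")
```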

4.2 Comparison of learning methods

To directly evaluate H3 , the comparison between learning methods, a two-sample comparison assuming unequal variances was utilized. Since the previous analyses confirmed that the control and experimental groups were equivalent prior to their respective learning activities and that learning did occur in both groups, Table V details the analysis, which shows a statistically significant higher level of learning in the experimental group ( p <0.10).

Using this role playing applied learning activity, students in the experimental group showed a 45 percent greater gain in learning than those attending traditional lectures. To further evaluate this result, given the relatively small sample and the moderate variation expected in any human-subjects experiment, the effect size should also be considered. A standard Cohen's d can be computed if the standard deviations of the two samples are roughly the same and can therefore be assumed to estimate a common population standard deviation. The computed effect size for this experiment using Cohen's d was 0.360, a relatively small effect size as d < 0.50 ( Cohen, 1988 ):

$$d = \frac{\mu_1 - \mu_2}{\sigma_{\mathrm{pooled}}}, \qquad \sigma_{\mathrm{pooled}} = \sqrt{\frac{\sigma_1^2 + \sigma_2^2}{2}}$$

However, the standard deviations of the two groups did differ slightly (1.094 for the control group and 1.398 for the experimental group), so the homogeneity-of-variance assumption is questionable and pooling the standard deviations may not be appropriate. One solution is to use the standard deviation of the control group in the denominator and calculate Glass' Δ ( Glass et al. , 1981 ), assuming that the standard deviation of the control group is untainted by the effects of the experiment and will therefore more closely reflect the true population variance. “The strength of this assumption is directly proportional to the size of the control group. The larger the control group, the more it is likely to resemble the population from which it was drawn” ( Ellis, 2009 ). With the control group containing n = 51, Glass' Δ was 0.413, a similarly small effect size to Cohen's d for this relatively small experiment:

$$\Delta = \frac{\mu_1 - \mu_2}{\sigma_{\mathrm{control}}}$$

Finally, it should also be considered that the experimental group ( n = 30) was smaller than the control group ( n = 51). Taking this into account, Hedges' g accounts for sample size differences by weighting the variances by their respective sample sizes ( Hedges, 1981 ). Hedges' g was 0.372, again confirming the small effect size:

$$g = \frac{\mu_1 - \mu_2}{\sigma^{*}_{\mathrm{pooled}}}, \qquad \sigma^{*}_{\mathrm{pooled}} = \sqrt{\frac{(n_1 - 1)\sigma_1^2 + (n_2 - 1)\sigma_2^2}{n_1 + n_2 - 2}}$$
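The three effect sizes quoted above can be reproduced from the summary statistics reported in Table V (group means, variances and sample sizes for the gain scores). The short script below is a sketch of that arithmetic, not the authors' own code; small rounding differences from the reported 0.360, 0.413 and 0.372 are expected.

```python
import math

# Gain-in-knowledge summary statistics from Table V.
mean_exp, var_exp, n_exp = 1.463, 1.956, 30   # experimental group (role playing)
mean_ctl, var_ctl, n_ctl = 1.012, 1.198, 51   # control group (traditional lecture)

sd_exp, sd_ctl = math.sqrt(var_exp), math.sqrt(var_ctl)
diff = mean_exp - mean_ctl

# Cohen's d: difference divided by the root mean square of the two SDs.
cohens_d = diff / math.sqrt((sd_exp ** 2 + sd_ctl ** 2) / 2)

# Glass' delta: standardize by the control-group SD only.
glass_delta = diff / sd_ctl

# Hedges' g: pool the variances weighted by degrees of freedom.
pooled_sd = math.sqrt(((n_exp - 1) * var_exp + (n_ctl - 1) * var_ctl)
                      / (n_exp + n_ctl - 2))
hedges_g = diff / pooled_sd

print(round(cohens_d, 3), round(glass_delta, 3), round(hedges_g, 3))
# Approximately 0.36, 0.41 and 0.37, in line with the values reported above.
```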

5. Recommendations and conclusion

From the analysis of this experiment comparing the learning benefits of traditional lectures vs applied learning using role playing, it was first verified that there was a gain in immediate knowledge in both modes of learning, confirming H2(a/b) . The goals of this study were met, as the analysis showed a significantly greater gain in knowledge (post-test versus pre-test for each student) for the role playing groups (1.463 average gain in score) compared to those exposed only to the traditional lecture (1.012 average gain in score). This represents a valuable 45 percent improvement in learning scores. Therefore, the time investment in preparing and executing the role playing activity was of significant value to the customer – the students.

5.1 Limitations

This research was conducted using a single type of applied learning – role playing – within a single discipline in a business school. In addition, the application was tested in a graduate-level educational environment, so the findings are not generalizable to other student levels or fields of study. However, the findings are in line with various other studies that reported similar benefits of applied learning techniques ( Brown et al. , 2018 ; Silva et al. , 2018 ; Bhattacharya and Neelam, 2018 ). Larger samples using a variety of applications should continue to be explored.

5.2 Scope for future research

Applied learning has vast potential in the fields of training and education. The scope of this research has been limited to one topic area (SCM) and one tool of applied learning (role playing). The learning assessment measures in this study (i.e. the pre- and post-test) consisted of only a few multiple-choice questions, but real learning may be better captured using a more extensive test that includes application-based essay questions. It is also possible to conduct a qualitative assessment of the participants to study their subjective experience of applied learning in comparison with traditional learning. Further, the current research considered only the dependent variable of gain in knowledge; other variables such as the learning experience, the quality of learning, the time efficiency of learning and the capability to implement the concepts in business and managerial situations should also be considered. Finally, long-term retention of the knowledge gained in the classroom may be of even greater importance to meeting the goals of education; longitudinal studies could be conducted at the one-month, six-month and one-year milestones, as is done with many alumni surveys that attempt to assess the value of an education after graduation. Because the scope of research in the field of applied learning is vast, continued study and implementation of applied learning techniques is highly recommended.

Figure 1. Visualization of hypotheses

Figure 2. Data visualization from control group (traditional lecture)

Figure 3. Data visualization from experimental group (role playing)

Figure 4. Histogram of student learning for the control group (traditional lecture)

Figure 5. Histogram of student learning for the experimental group (role playing)

Table I. Test for normality – Shapiro-Wilk normality test (W)

                Control group    Experimental group
Sample size     51               31
Pre-test        0.9461**         0.8679***
Post-test       0.9381**         0.9186**
Notes: n reduced due to subjects with an incomplete pre- or post-test; *p<0.10; **p<0.05; ***p<0.01

Table II. Verification of group equivalence – pre-test scores (two-sample t-test assuming unequal variances)

                               Control group    Experimental group
Mean                           1.911            1.933
Variance                       1.408            0.711
Observations                   51               30
Hypothesized mean difference   0
df                             76
t Stat                         0.0995
P(T⩽t) two-tail                0.9210
t Critical two-tail            1.9917

Table III. Verification that learning occurred – control group (traditional lecture)

                               Pre-test    Post-test
Mean                           1.911       2.923
Variance                       1.408       1.459
Observations                   51          51
Pearson correlation            0.582
Hypothesized mean difference   0
df                             50
t Stat                         −6.603
P(T⩽t) one-tail                0.000
t Critical one-tail            1.676

Table IV. Verification that learning occurred – experimental group (role playing)

                               Pre-test    Post-test
Mean                           1.933       3.397
Variance                       0.711       1.600
Observations                   30          30
Pearson correlation            0.167
Hypothesized mean difference   0
df                             29
t Stat                         −5.732
P(T⩽t) one-tail                0.000
t Critical one-tail            1.699

Table V. Comparison of groups – gain in knowledge (post- minus pre-test scores)

                               Control group (traditional lecture)    Experimental group (role playing)
Mean                           1.012                                  1.463
Variance                       1.198                                  1.956
Observations                   51                                     30
Hypothesized mean difference   0
df                             50
t Stat                         −1.516
P(T⩽t) one-tail                0.068
t Critical one-tail            1.676

Accreditation Council for Pharmacy Education ( n.d. ), “ Accreditation standards and guidelines for the professional program in pharmacy leading to the doctor of pharmacy degree ”, Version 2.0., available at: www.acpe-accredit.org/pdf/FinalS2007Guidelines2.0.pdf (accessed March 5, 2016 ).

Alessi , S. and Trollip , S. ( 1985 ), Computer–Based Instruction: Methods and development , Prentice-Hall , Upper Saddle River, NJ .

Alkin , M. and Christie , C. ( 2002 ), “ The use of role-play in teaching evaluation ”, American Journal of Evaluation , Vol. 23 No. 2 , pp. 209 - 218 .

Barnabe , F. ( 2016 ), “ Policy development and learning in complex business domains: the potentials of role playing ”, International Journal of Business and Management , Vol. 11 No. 12 , pp. 15 - 29 .

Bates , T. ( 2005 ), Technology, E-learning and Distance Education , Routledge , New York, NY .

Beck , L. and Chizhik , A. ( 2008 ), “ An experimental study of cooperative learning in CS1 ”, Proceedings of the 39th SIGCSE Technical Symposium on Computer Science Education , New York, NY , pp. 205 - 209 .

Bhattacharya , S. and Neelam , N. ( 2018 ), “ Perceived value of internship experience: a try before you leap ”, Higher Education, Skills and Work-Based Learning , available at: www.emeraldinsight.com/doi/full/10.1108/HESWBL-03-2018-0025

Boslaugh , S. and Watters , P.A. ( 2008 ), Statistics in a Nutshell: A Desktop Quick Reference , O'Reilly Media , Sebastopol, CA .

Brown , C. , Willett , J. , Goldfine , R. and Goldfine , B. ( 2018 ), “ Sport management internships: recommendations for improving upon experiential learning ”, Journal of Hospitality, Leisure, Sport & Tourism Education , Vol. 22 No. 22 , pp. 75 - 81 .

Bryant , J. and Darwin , J. ( 2004 ), “ Exploring inter-organisational relationships in the health service: an immersive drama approach ”, European Journal of Operational Research , Vol. 152 No. 3 , pp. 655 - 666 .

Chang , Y. , Chen , W. , Yang , Y. and Chao , H. ( 2009 ), “ A flexible web-based simulation game for production and logistics management courses ”, Simulation Modelling Practice and Theory , Vol. 17 No. 7 , pp. 1241 - 1253 .

Christmas , D. ( 2014 ), “ Authentic pedagogy: Implications for education ”, European Journal of Research and Reflection in Educational Sciences , Vol. 2 No. 4 , pp. 51 - 57 .

Christopher , M. , Mena , C. , Khan , O. and Yurt , O. ( 2011 ), “ Approaches to managing global sourcing risk ”, International Journal of Supply Chain Management , Vol. 16 No. 2 , pp. 67 - 81 .

Chwif , L. , Barretto , M. and Paul , R. ( 2000 ), “ On simulation model complexity ”, Proceedings Winter Simulation Conference , IEEE , December , Orlando, FL , pp. 449 - 455 .

Cohen , J. ( 1988 ), Statistical Power Analysis for the Behavioral Sciences , Routledge Academic , New York, NY .

Daly , A. , Grove , S. , Dorsch , M. and Fisk , R. ( 2009 ), “ The impact of improvisation training on service employees in a European airline: a case study ”, European Journal of Marketing , Vol. 43 No. 3 , pp. 459 - 472 .

Dempsey , J. , Lucassen , B. , Haynes , L. and Casey , M. ( 1996 ), “ Instructional applications of computer games ”, Technical Report No. ED394500, University of South Alabama, Mobile, AL .

Douglas , A. , Miller , B. , Kwansa , F. and Cummings , P. ( 2008 ), “ Students’ perceptions of the usefulness of a virtual simulation in post- secondary hospitality education ”, Journal of Teaching in Travel and Tourism , Vol. 7 No. 3 , pp. 1 - 19 .

Doymus , K. , Karacop , A. and Simsek , U. ( 2010 ), “ Effects of jigsaw and animation techniques on students’ understanding of concepts and subjects in electrochemistry ”, Educational Technology Research and Development , Vol. 58 No. 6 , pp. 671 - 691 .

Elgood , C. ( 1997 ), Handbook of Management Games and Simulation , 6th ed. , Gower Publishing , Aldershot .

Ellis , P. ( 2009 ), “ Effect size equations ”, available at: www.polyu.edu.hk/mm/effectsizefaqs/effect_size_equations2.html (accessed December 5, 2018 ).

Ferguson , D. , Hanreddy , A. and Draxton , S. ( 2011 ), “ Giving students voice as a strategy for improving teacher practice ”, London Review of Education , Vol. 9 No. 1 , pp. 55 - 70 .

Ferrero , G. , Bichai , F. and Rusca , M. ( 2018 ), “ Experiential learning through role-playing: enhancing stakeholder collaboration in water safety plans ”, Water , Vol. 10 No. 2 , pp. 227 - 237 .

Glass , G. , Smith , M. and McGaw , B. ( 1981 ), Meta-Analysis in Social Research , Sage Publications , Beverly Hills, CA .

Gooden , D. , Preziosi , R. and Barnes , F. ( 2009 ), “ An examination of Kolb’s learning style inventory ”, American Journal of Business Education , Vol. 2 No. 3 , pp. 57 - 62 .

Hedges , L. ( 1981 ), “ Distribution theory for glass’s estimator of effect size and related estimators ”, Journal of Educational Statistics , Vol. 6 No. 2 , pp. 107 - 128 .

Hmelo-Silver , C. and Barrows , H. ( 2006 ), “ Goals and strategies of a problem-based learning facilitator ”, The Interdisciplinary Journal of Problem-Based Learning , Vol. 1 No. 1 , pp. 21 - 39 .

Howell , J. ( 1991 ), “ Using role-play as a teaching method ”, Teaching Public Administration , Vol. 12 No. 1 , pp. 69 - 75 .

Hudson , J. ( 2009 ), Pathways Between Eastern and Western Education , Information Age Publishing Inc , Charlotte, NC .

Hui , F. and Koplin , M. ( 2011 ), “ The implementation of authentic activities for learning; a case study in finance education ”, E-Journal of Business Education and Scholarship of Teaching , Vol. 5 No. 1 , pp. 59 - 72 .

Husbands , C. and Pearce , J. ( 2012 ), What Makes Great Pedagogy? Nine Claims from Research. Research and Development Network National Themes: Theme One , National College for School Leadership , Nottingham .

James , M. and Pollard , A. ( 2011 ), “ TLRP’s ten principles for effective pedagogy: rationale, development, evidence, argument and impact ”, Research Papers in Education , Vol. 26 No. 3 , pp. 275 - 328 .

Johnson , D. and Johnson , R. ( 2005 ), “ New developments in social interdependence theory ”, Genetic, Social, & General Psychology Monographs , Vol. 131 No. 4 , pp. 285 - 358 .

Johnson , D. and Johnson , R. ( 2009 ), “ An educational psychology success story: Social interdependence theory and cooperative learning ”, Educational Researcher , Vol. 38 No. 5 , pp. 365 - 379 .

Katsaliakia , K. , Mustafeeb , N. and Kumar , S. ( 2014 ), “ A game-based approach towards facilitating decision making for perishable products: an example of blood supply chain ”, Expert Systems with Applications , Vol. 41 No. 9 , pp. 4043 - 4059 .

Kilic , D. ( 2008 ), “ The effects of the jigsaw technique on learning the concepts of the principles and methods of teaching ”, World Applied Sciences Journal , Vol. 4 No. 1 , pp. 109 - 114 .

Kim , M. and Hannafin , M. ( 2011 ), “ Scaffolding 6th graders’ problem-solving in technology-enhanced science classrooms: a qualitative case study ”, Instructional Science: An International Journal of the Learning Sciences , Vol. 39 No. 3 , pp. 255 - 282 .

Klein , P. ( 2003 ), “ Rethinking the multiplicity of cognitive resources and curricular representations: alternatives to ‘learning styles’ and ‘multiple intelligences ”, Journal of Curriculum Studies , Vol. 35 No. 1 , pp. 45 - 81 .


What is Applied Research? Definition, Types, Examples


Ever wondered how groundbreaking solutions to real-world challenges are developed, or how innovations come to life? Applied research holds the key. In this guide, we will delve deep into the world of applied research, uncovering its principles, methodologies, and real-world impact.  From harnessing cutting-edge technology to address healthcare crises to revolutionizing industries through data-driven insights, we'll explore the diverse domains where applied research thrives.

What is Applied Research?

Applied research is a systematic and organized inquiry aimed at solving specific real-world problems or improving existing practices, products, or services. Unlike basic research, which focuses on expanding general knowledge, applied research is all about using existing knowledge to address practical issues.

The primary purpose of applied research is to generate actionable insights and solutions that have a direct impact on practical situations. It seeks to bridge the gap between theory and practice by taking existing knowledge and applying it in real-world contexts. Applied research is driven by the need to address specific challenges, make informed decisions, and drive innovation in various domains.

Importance of Applied Research

Applied research holds immense significance across various fields and industries. Here's a list of reasons why applied research is crucial:

  • Problem Solving:  Applied research provides effective solutions to real-world problems, improving processes, products, and services.
  • Innovation:  It drives innovation by identifying opportunities for enhancement and developing practical solutions.
  • Evidence-Based Decision-Making:  Policymakers and decision-makers rely on applied research findings to make informed choices and shape effective policies.
  • Competitive Advantage:  In business, applied research can lead to improved products, increased efficiency, and a competitive edge in the market.
  • Social Impact:  Applied research contributes to solving societal issues, from healthcare improvements to environmental sustainability.
  • Technological Advancement:  In technology and engineering, it fuels advancements by applying scientific knowledge to practical applications.

Applied Research vs. Basic Research

Applied research differs from basic research in several key ways:

  • Objectives:  Applied research aims to address specific practical problems or improve existing processes, while basic research seeks to expand general knowledge.
  • Focus:  Applied research focuses on solving real-world challenges, whereas basic research explores fundamental principles and concepts.
  • Applicability:  Applied research findings are directly applicable to practical situations, while basic research often lacks immediate practical applications.
  • Immediate Impact:  Applied research has a more immediate impact on solving problems and improving practices, whereas basic research may have longer-term or indirect effects on knowledge and innovation.
  • Research Questions:  Applied research formulates research questions related to practical issues, while basic research poses questions to explore theoretical or fundamental concepts.

Understanding these distinctions is essential for researchers, policymakers, and stakeholders in various fields, as it guides the choice of research approach and the expected outcomes of a research endeavor.

Types of Applied Research

Applied research encompasses various types, each tailored to specific objectives and domains. Understanding these types is essential for choosing the right approach to address real-world problems effectively. Here are some common types of applied research, each with its distinct focus and methodologies.

Evaluation Research

Purpose:  Evaluation research assesses the effectiveness, efficiency, and impact of programs, interventions, or policies. It aims to determine whether these initiatives meet their intended goals and objectives.

Methodology:  Researchers employ a range of quantitative and qualitative methods, including surveys, interviews, observations, and data analysis, to evaluate the outcomes of programs or interventions.

Example:  Evaluating the impact of a public health campaign aimed at reducing smoking rates by analyzing pre- and post-campaign survey data on smoking habits and attitudes.

Action Research

Purpose:  Action research focuses on solving practical problems within a specific organizational or community context. It involves collaboration between researchers and practitioners to implement and assess solutions.

Methodology:  Action research is iterative and participatory, with researchers and stakeholders working together to identify problems, develop interventions, and assess their effectiveness. It often involves cycles of planning, action, reflection, and adjustment.

Example:  Teachers collaborating with researchers to improve classroom teaching methods and student outcomes by implementing and refining innovative teaching strategies.

Case Study Research

Purpose:   Case study research investigates a particular individual, organization, or situation in-depth to gain a comprehensive understanding of a specific phenomenon or issue.

Methodology:  Researchers collect and analyze a wealth of data, which may include interviews, documents, observations, and archival records. The goal is to provide a detailed and context-rich description of the case.

Example:  A detailed examination of a successful startup company's growth strategies and challenges, offering insights into factors contributing to its success.

Applied Experimental Research

Purpose:  Applied experimental research seeks to establish causal relationships between variables by manipulating one or more factors and observing their impact on outcomes. It helps determine cause-and-effect relationships in real-world settings.

Methodology:  Researchers conduct controlled experiments, similar to those in basic research, but within practical contexts. They manipulate variables and use statistical analysis to assess their effects on specific outcomes.

Example:  Testing the impact of different website designs on user engagement and conversion rates by randomly assigning visitors to various design versions and measuring their interactions.
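To make this concrete, here is a minimal Python sketch of how such an A/B comparison might be analyzed. The visitor counts are invented, and the use of scipy's chi-square test is an illustrative choice rather than a prescribed method:

```python
# A/B test sketch: do two website designs differ in conversion rate?
# The counts below are made up for illustration.
from scipy.stats import chi2_contingency

# Rows: design A, design B; columns: converted, did not convert
observed = [[120, 880],   # design A: 120 conversions out of 1000 visitors
            [150, 850]]   # design B: 150 conversions out of 1000 visitors

chi2, p_value, dof, expected = chi2_contingency(observed)

print(f"Chi-square statistic: {chi2:.2f}")
print(f"p-value: {p_value:.4f}")
if p_value < 0.05:
    print("The difference in conversion rates is statistically significant.")
else:
    print("No statistically significant difference was detected.")
```

A low p-value would suggest the two designs convert at genuinely different rates; in practice you would also report effect sizes and confidence intervals, not just significance.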

Survey Research

Purpose:   Survey research involves collecting data from a sample of individuals or organizations to understand their opinions, attitudes, behaviors, or characteristics. It is commonly used to gather quantitative data on specific topics.

Methodology:  Researchers design surveys with carefully crafted questions and administer them to a representative sample of the target population. Statistical analysis is used to draw conclusions based on survey responses.

Example:  Conducting a national survey to assess public sentiment and preferences on environmental conservation initiatives and policies.
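As a small illustration, the headline figure from such a survey is often a proportion with a margin of error. The sketch below, using invented numbers and a simple normal approximation, shows one way to compute it:

```python
# Survey sketch: estimate the share of respondents who support an initiative,
# with a 95% confidence interval (normal approximation). Numbers are invented.
import math

supporters = 412        # respondents answering "yes"
sample_size = 1000      # total respondents

p_hat = supporters / sample_size
standard_error = math.sqrt(p_hat * (1 - p_hat) / sample_size)
margin = 1.96 * standard_error  # 1.96 corresponds to ~95% confidence

print(f"Estimated support: {p_hat:.1%}")
print(f"95% CI: [{p_hat - margin:.1%}, {p_hat + margin:.1%}]")
```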

These types of applied research provide a framework for approaching real-world challenges systematically. Researchers can choose the most appropriate type based on their research goals, objectives, and the nature of the problem or phenomenon they seek to address. By selecting the right approach, applied researchers can generate valuable insights and practical solutions in various fields and disciplines.

How to Prepare for Applied Research?

In the preparatory phase of your applied research journey, you'll lay the groundwork for a successful study. This phase involves a series of crucial steps that will shape the direction and ethics of your research project.

Identifying Research Questions

One of the key starting points for any applied research endeavor is identifying the right research questions. Your research questions should be clear, specific, and directly related to the problem or issue you aim to address.

  • Engage with Stakeholders:  Reach out to individuals or groups who are affected by or have an interest in the issue you're researching. Their perspectives can help you formulate relevant questions.
  • Consider Feasibility:  Ensure that your research questions are feasible within your available resources, including time, budget, and access to data or participants.
  • Prioritize Impact:  Focus on questions that have the potential to create meaningful change or provide valuable insights in your chosen field.

Formulating Hypotheses

Hypotheses serve as the guiding stars of your research, providing a clear direction for your investigation. Formulating hypotheses is a critical step that sets the stage for testing and validating your ideas.

  • Testable Predictions:  Your hypotheses should be testable and capable of being proven or disproven through empirical research.
  • Informed by Literature:  Base your hypotheses on existing knowledge and insights gained from the literature review. They should build upon what is already known and aim to expand that knowledge.
  • Clarity and Precision:  Write your hypotheses in a clear and precise manner, specifying the expected relationship or outcome you intend to explore.

Literature Review

Conducting a thorough literature review is like embarking on a treasure hunt through existing knowledge in your field. It's a comprehensive exploration of what other researchers have already discovered and what gaps in knowledge still exist.

  • Search Strategies:  Utilize academic databases, journals, books, and credible online sources to search for relevant literature.
  • Analyze Existing Research:  Examine the findings, methodologies, and conclusions of previous studies related to your research topic.
  • Identify Research Gaps:  Look for areas where current knowledge is insufficient or contradictory. These gaps will be the foundation for your own research.

Data Collection Methods

Selecting the proper data collection methods is crucial to gather the information needed to address your research questions. The choice of methods will depend on the nature of your research and the type of data you require.

  • Quantitative vs. Qualitative:  Decide whether you will collect numerical data (quantitative) or focus on descriptive insights and narratives (qualitative).
  • Survey Design :  If surveys are part of your data collection plan, carefully design questions that are clear, unbiased, and aligned with your research goals.
  • Sampling Strategies:  Determine how you will select participants or data points to ensure representativeness and reliability.

Ethical Considerations

Ethical considerations are at the heart of responsible research. Ensuring that your study is conducted ethically and with integrity is paramount.

  • Informed Consent:  Obtain informed consent from participants, ensuring they understand the purpose of the research, potential risks, and their right to withdraw at any time.
  • Confidentiality:  Safeguard participants' personal information and ensure their anonymity when reporting findings.
  • Minimizing Harm:  Take measures to mitigate any physical or emotional harm that participants may experience during the research process.
  • Ethical Reporting:  Accurately represent your research findings, avoiding manipulation or selective reporting that may mislead readers or stakeholders.

By diligently addressing these aspects of research preparation, you are building a solid foundation for your applied research project, setting the stage for effective data collection and meaningful analysis in the subsequent phases of your study.

How to Design Your Research Study?

When it comes to applied research, the design of your study is paramount. It shapes the entire research process, from data collection to analysis and interpretation. In this section, we will explore the various elements that make up the foundation of your research design.

Research Design Types

Your choice of research design is like selecting the blueprint for your research project. Different research design types offer unique advantages and are suited for different research questions. Here are some common research design types:

  • Experimental Design:  In this design, researchers manipulate one or more variables to observe their impact on outcomes. It allows for causal inference but may not always be feasible in applied research due to ethical or practical constraints.
  • Descriptive Design:  This design aims to describe a phenomenon or population without manipulating variables. It is often used when researchers want to provide a snapshot of a situation or gain insights into a specific context.
  • Correlational Design:  In this design, researchers examine relationships between variables without manipulating them. It helps identify associations but does not establish causation.
  • Longitudinal Design:  Longitudinal studies involve collecting data from the same subjects over an extended period. They are valuable for tracking changes or developments over time.
  • Cross-Sectional Design:  This design involves data collection from a diverse group of subjects at a single point in time. It's helpful in studying differences or variations among groups.

Sampling Methods

Sampling methods determine who or what will be included in your study. The choice of sampling method has a significant impact on the generalizability of your findings. Here are some standard sampling methods:

  • Random Sampling:  This method involves selecting participants or data points entirely at random from the population. It ensures every element has an equal chance of being included, which enhances representativeness.
  • Stratified Sampling:  In stratified sampling, the population is divided into subgroups or strata, and then random samples are drawn from each stratum. This method ensures that each subgroup is adequately represented (see the sketch after this list).
  • Convenience Sampling:  Researchers choose subjects or data points that are readily available and accessible. While convenient, this method may lead to sampling bias as it may not accurately represent the entire population.
  • Purposive Sampling:  In purposive sampling, researchers deliberately select specific individuals or groups based on their expertise, experience, or relevance to the research topic. It is often used when seeking specialized knowledge.
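The sketch below contrasts simple random and stratified sampling using pandas; the respondent table and the "region" strata are hypothetical stand-ins for whatever population and subgroups your study involves:

```python
# Sampling sketch: simple random vs. stratified sampling with pandas.
# The respondent data and the "region" strata are hypothetical.
import pandas as pd

population = pd.DataFrame({
    "respondent_id": range(1, 1001),
    "region": ["north"] * 500 + ["south"] * 300 + ["east"] * 200,
})

# Simple random sample: every respondent has the same chance of selection.
random_sample = population.sample(frac=0.1, random_state=42)

# Stratified sample: draw 10% from each region so all strata are represented.
stratified_sample = (
    population.groupby("region", group_keys=False)
    .sample(frac=0.1, random_state=42)
)

print(random_sample["region"].value_counts())
print(stratified_sample["region"].value_counts())
```

Comparing the two region counts shows why stratification matters: the stratified sample mirrors the population's subgroup proportions by construction, while a simple random sample only does so on average.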

Data Collection Tools

Selecting the right data collection tools is essential to gather accurate and relevant information. Your choice of tools will depend on the research design and objectives. Standard data collection tools include:

  • Questionnaires and Surveys:  These structured instruments use standardized questions to gather data from participants. They are suitable for collecting large amounts of quantitative data.
  • Interviews:   Interviews can be structured, semi-structured, or unstructured. They provide an opportunity to gather in-depth, qualitative insights from participants.
  • Observation:  Direct observation involves systematically watching and recording behaviors or events. It's valuable for studying behaviors or phenomena in their natural context.
  • Secondary Data :  Researchers can also utilize existing data sources, such as government reports, databases, or historical records, for their research.

Variables and Measurement

Defining variables and choosing appropriate measurement methods is crucial for ensuring the reliability and validity of your research. Variables are characteristics, phenomena, or factors that can change or vary in your study. They can be categorized into:

  • Independent Variables:  These are the variables you manipulate or control in your study to observe their effects on other variables.
  • Dependent Variables:  These are the variables you measure to assess the impact of the independent variables.

Choosing the right measurement techniques, scales, or instruments is essential to accurately quantify variables and collect valid data. It's crucial to establish clear operational definitions for each variable to ensure consistency in measurement.

Data Analysis Techniques

Once you have collected your data, the next step is to analyze it effectively. Data analysis involves the following steps (a short worked example follows the list):

  • Data Cleaning:  Removing any errors, inconsistencies, or outliers from your dataset to ensure data quality.
  • Statistical Analysis:  Depending on your research design and data type, you may use various statistical techniques such as regression analysis, t-tests, ANOVA, or chi-square tests.
  • Qualitative Analysis:  For qualitative data, techniques like thematic analysis, content analysis, or discourse analysis help uncover patterns and themes.
  • Data Visualization:  Using graphs, charts, and visual representations to present your data effectively.
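As a worked example of the cleaning and statistical-analysis steps above, here is a short sketch using pandas and scipy. The dataset, the 0-100 plausibility rule, and the choice of Welch's t-test are all illustrative assumptions:

```python
# Data cleaning and a t-test on an invented dataset.
import pandas as pd
from scipy.stats import ttest_ind

data = pd.DataFrame({
    "group": ["control"] * 6 + ["treatment"] * 6,
    "score": [72, 75, None, 70, 74, 300,      # None = missing, 300 = obvious outlier
              80, 82, 85, 79, 84, 81],
})

# Cleaning: drop missing values and implausible scores outside 0-100.
clean = data.dropna(subset=["score"])
clean = clean[clean["score"].between(0, 100)]

control = clean.loc[clean["group"] == "control", "score"]
treatment = clean.loc[clean["group"] == "treatment", "score"]

t_stat, p_value = ttest_ind(treatment, control, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```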


With a solid understanding of research design, sampling methods, data collection tools, variables, and measurement, you are well-equipped to embark on your applied research journey. These elements lay the groundwork for collecting valuable data and conducting meaningful analyses in the subsequent phases of your study.

How to Conduct Applied Research?

Now that you've prepared and designed your research study, it's time to delve into the practical aspects of conducting applied research. This phase involves the execution of your research plan, from collecting data to drawing meaningful conclusions. Let's explore the critical components in this stage.

Data Collection Phase

The data collection phase is where your research plan comes to life. It's a crucial step that requires precision and attention to detail to ensure the quality and reliability of your data.

  • Implement Data Collection Methods:   Execute the data collection methods you've chosen, whether they involve surveys, interviews, observations, or the analysis of existing datasets.
  • Maintain Consistency:  Ensure that data collection is carried out consistently according to your research design and protocols. Minimize any variations or deviations that may introduce bias .
  • Document the Process:  Keep thorough records of the data collection process. Note any challenges, unexpected occurrences, or deviations from your original plan. Documentation is essential for transparency and replication.
  • Quality Assurance:  Continuously monitor the quality of the data you collect. Check for errors, missing information, or outliers. Implement data validation and cleaning procedures to address any issues promptly.
  • Participant Engagement:  If your research involves human participants, maintain open and respectful communication with them. Address any questions or concerns and ensure participants' comfort and willingness to participate.

Data Analysis Phase

Once you've collected your data, it's time to make sense of the information you've gathered. The data analysis phase involves transforming raw data into meaningful insights and patterns; a brief regression sketch follows the steps below.

  • Data Preparation:  Start by organizing and cleaning your data. This includes dealing with missing values, outliers, and ensuring data consistency.
  • Selecting Analysis Methods:  Depending on your research design and data type, choose the appropriate statistical or qualitative analysis methods. Common techniques include regression analysis, content analysis, or thematic coding.
  • Conducting Analysis:  Perform the chosen analysis systematically and according to established protocols. Ensure that your analysis is reproducible by documenting every step.
  • Interpreting Results:  Interpretation involves making sense of your findings in the context of your research questions and hypotheses. Consider the statistical significance of the results and any practical implications they may have.
  • Visualization:  Create visual representations of your data, such as graphs, charts, or tables, to convey your findings effectively. Visualizations make complex data more accessible to a broader audience.
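For instance, simple linear regression is one of the analysis methods mentioned above. The sketch below fits one with scipy on invented study-hours and exam-score data:

```python
# Simple linear regression: do study hours predict exam scores?
# The data points are invented for illustration.
from scipy.stats import linregress

study_hours = [1, 2, 3, 4, 5, 6, 7, 8]
exam_scores = [52, 55, 61, 64, 70, 72, 78, 83]

result = linregress(study_hours, exam_scores)

print(f"Slope: {result.slope:.2f} points per additional study hour")
print(f"Intercept: {result.intercept:.2f}")
print(f"R-squared: {result.rvalue ** 2:.3f}")
print(f"p-value: {result.pvalue:.4f}")
```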

Interpretation of Results

Interpreting research results is a critical step that bridges the gap between data analysis and drawing conclusions. This process involves making sense of the patterns and insights that emerge from your analysis.

  • Relate to Hypotheses:  Determine whether your results support or refute your hypotheses. Be prepared to explain any unexpected findings.
  • Contextualize Findings:  Consider the broader context in which your research takes place. How do your results fit into the larger body of knowledge in your field?
  • Identify Patterns :  Highlight significant trends, correlations, or relationships you've uncovered. Discuss their practical implications and relevance.
  • Acknowledge Limitations:  Be transparent about any limitations in your study that may affect the interpretation of results. This includes sample size, data quality, and potential biases.

Drawing Conclusions

Drawing conclusions is the ultimate goal of your research. It involves synthesizing your findings and answering the research questions you initially posed.

  • Answer Research Questions:  Explicitly address the research questions you formulated at the beginning of your study. State whether your findings confirm or challenge your initial hypotheses.
  • Highlight Insights:  Emphasize the key insights and contributions of your research. Discuss the practical implications of your findings and their relevance to the field.
  • Recommend Actions:  Based on your conclusions, suggest practical steps, recommendations, or future research directions. How can your research contribute to addressing the problem or challenge you investigated?
  • Consider Implications:  Reflect on the broader implications of your research for stakeholders, policymakers, or practitioners in your field.

Common Pitfalls to Avoid

During the data collection, analysis, interpretation, and conclusion-drawing phases, it's essential to be aware of common pitfalls that can affect the quality and integrity of your research.

  • Sampling Bias :  Ensure that your sample is representative of the population you intend to study. Address any bias that may have been introduced during data collection.
  • Data Manipulation:  Avoid manipulating or selectively reporting data to fit preconceived notions. Maintain transparency in your analysis and reporting.
  • Overinterpretation:  Be cautious of drawing overly broad conclusions based on limited data. Acknowledge the limitations of your study.
  • Ignoring Ethical Considerations:  Continuously uphold ethical standards in your research, from data collection to reporting. Protect participants' rights and privacy.
  • Lack of Validation:  Ensure that the methods and tools you use for data collection and analysis are valid and reliable. Validation helps establish the credibility of your findings.

By navigating the data collection, analysis, interpretation, and conclusion-drawing phases with care and attention to detail, you'll be well-prepared to confidently share your research findings and contribute to advancing knowledge in your field.

How to Report Applied Research Results?

Now that you've conducted your applied research and drawn meaningful conclusions, it's time to share your insights with the world. Effective reporting and communication are crucial to ensure that your research has a real impact and contributes to the broader knowledge base.

Writing Research Reports

Writing a comprehensive research report is the cornerstone of communicating your findings. It provides a detailed account of your research process, results, and conclusions. Here's what you need to consider:

Structure of a Research Report

  • Title:  Create a concise, informative title that reflects the essence of your research.
  • Abstract:  Summarize your research in a clear and concise manner, highlighting key objectives, methods, results, and conclusions.
  • Introduction:  Provide an overview of your research topic, objectives, significance, and research questions.
  • Literature Review:  Summarize relevant literature and identify gaps in existing knowledge that your research addresses.
  • Methodology:  Describe your research design, sampling methods, data collection tools, and data analysis techniques.
  • Results:  Present your findings using tables, charts, and narratives. Be transparent and objective in reporting your results.
  • Discussion:  Interpret your results, discuss their implications, and relate them to your research questions and hypotheses.
  • Conclusion:  Summarize your main findings, their significance, and the implications for future research or practical applications.
  • References:  Cite all sources and studies you referenced in your report using a consistent citation style (e.g., APA, MLA).

Writing Tips

  • Use clear and concise language, avoiding jargon or overly technical terms.
  • Organize your report logically, with headings and subheadings for easy navigation.
  • Provide evidence and data to support your claims and conclusions.
  • Consider your target audience and tailor the report to their level of expertise and interest.

Creating Visualizations

Visualizations are powerful tools for conveying complex data and making your research findings more accessible. Here are some types of visualizations commonly used in research reports:

Charts and Graphs

  • Bar Charts:  Ideal for comparing categories or groups.
  • Line Charts:  Effective for showing trends or changes over time.
  • Pie Charts:  Useful for displaying proportions or percentages.
  • Data Tables:  Present numerical data in an organized format.
  • Cross-tabulations:  Show relationships between variables.

Diagrams and Maps

  • Flowcharts:  Visualize processes or workflows.
  • Concept Maps:  Illustrate connections between concepts.
  • Geographic Maps:  Display spatial data and patterns.

When creating visualizations (a minimal plotting sketch follows this list):

  • Choose the correct type of visualization for your data and research questions.
  • Ensure that visualizations are labeled, clear, and easy to understand.
  • Provide context and explanations to help readers interpret the visuals.
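As a minimal example of these guidelines, the following matplotlib sketch draws a labeled bar chart from invented survey counts:

```python
# Bar chart sketch: visualize invented survey responses with clear labels.
import matplotlib.pyplot as plt

categories = ["Strongly agree", "Agree", "Neutral", "Disagree"]
responses = [42, 31, 15, 12]

fig, ax = plt.subplots(figsize=(6, 4))
ax.bar(categories, responses, color="steelblue")

ax.set_title("Attitudes toward the new program (n = 100)")
ax.set_xlabel("Response")
ax.set_ylabel("Number of respondents")
ax.bar_label(ax.containers[0])  # annotate each bar with its count

plt.tight_layout()
plt.savefig("survey_responses.png", dpi=150)
```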

Presenting Your Research

Presenting your research to an audience is an opportunity to engage, educate, and inspire. Whether it's through a conference presentation, seminar, or webinar, effective communication is vital.

  • Know Your Audience:  Tailor your presentation to the interests and expertise of your audience.
  • Practice:  Rehearse your presentation to ensure a smooth delivery and confident demeanor.
  • Use Visual Aids:  Enhance your presentation with visual aids such as slides, images, or videos.
  • Engage with Questions:  Encourage questions and discussions to foster interaction and clarify points.
  • Stay within Time Limits:  Respect time constraints and stay on schedule.

Peer Review Process

Before your research is published, it typically undergoes a peer review process. This involves experts in your field evaluating the quality, validity, and significance of your work. The peer review process aims to ensure the integrity and credibility of your research.

  • Submission:  Submit your research manuscript to a journal or conference for review.
  • Editorial Review:  The editorial team assesses your submission's fit with the journal's scope and may conduct an initial review for quality and compliance.
  • Peer Review:  Your manuscript is sent to peer reviewers who evaluate it for methodology, validity, significance, and adherence to ethical standards.
  • Feedback and Revision:  Based on reviewers' feedback, you may be asked to revise and improve your research.
  • Acceptance or Rejection:  After revisions, the manuscript is reevaluated, and a decision is made regarding publication.

Publishing Your Research

Publishing your research is the final step in sharing your findings with the broader scientific community. It allows others to access and build upon your work. Consider the following when choosing where to publish:

  • Journal Selection:  Choose a reputable journal that aligns with your research field and target audience.
  • Review Process:  Understand the journal's peer review process and requirements for submission.
  • Open Access:  Consider whether you want your research to be open access, freely accessible to all.

Once published, actively promote your research through academic networks, conferences, and social media to maximize its reach and impact.

By effectively reporting and communicating your research findings, you contribute to the advancement of knowledge, inspire others, and ensure that your hard work has a meaningful impact on your field and beyond.

Applied Research Examples

To provide a deeper understanding of applied research's impact and relevance, let's delve into specific real-world examples that demonstrate how this type of research has addressed pressing challenges and improved our lives in tangible ways.

Applied Medical Research: mRNA Vaccines

Example:  mRNA (messenger RNA) vaccine technology, exemplified by the COVID-19 vaccines developed by Pfizer-BioNTech and Moderna, is a remarkable achievement in the field of applied medical research.

Applied researchers in this domain utilized mRNA technology to create vaccines that provide immunity against the SARS-CoV-2 virus. Unlike traditional vaccines, which use weakened or inactivated viruses, mRNA vaccines instruct cells to produce a harmless spike protein found on the virus's surface. The immune system then recognizes this protein and mounts a defense, preparing the body to combat the actual virus.

Impact:  The rapid development and deployment of mRNA vaccines during the COVID-19 pandemic have been groundbreaking. They've played a crucial role in controlling the spread of the virus and saving countless lives worldwide. This example underscores how applied research can revolutionize healthcare and respond swiftly to global health crises.

Environmental Science and Applied Research: Ocean Cleanup

Example:  The Ocean Cleanup Project, founded by Boyan Slat, is an ambitious endeavor rooted in applied research to combat plastic pollution in the world's oceans.

This project employs innovative technology, such as large-scale floating barriers and autonomous systems, to collect and remove plastic debris from the ocean. Applied researchers have played a pivotal role in designing, testing, and optimizing these systems to make them efficient and environmentally friendly.

Impact:  The Ocean Cleanup Project is a testament to the power of applied research in addressing pressing environmental challenges. By removing plastic waste from the oceans, it mitigates harm to marine ecosystems and raises awareness about the urgent need for sustainable waste management.

Business and Applied Research: E-commerce Personalization

Example:   E-commerce giants like Amazon and Netflix use applied research to develop sophisticated personalization algorithms that tailor product recommendations and content to individual users.

Applied researchers in data science and machine learning analyze user behavior, preferences, and historical data to create recommendation systems. These algorithms utilize predictive analytics to suggest products, movies, or shows that align with a user's interests.
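The general idea behind such recommendation engines can be sketched in a few lines of Python. The item-based collaborative-filtering example below uses an invented ratings matrix and plain cosine similarity; it illustrates the technique in principle, not the proprietary algorithms these companies actually run:

```python
# Item-based recommendation sketch: suggest an item similar to ones a user liked.
# The ratings matrix is invented; rows = users, columns = items.
import numpy as np

ratings = np.array([
    [5, 4, 0, 1],   # user 0
    [4, 5, 1, 0],   # user 1
    [1, 0, 5, 4],   # user 2
    [0, 1, 4, 5],   # user 3
], dtype=float)

# Cosine similarity between item columns.
norms = np.linalg.norm(ratings, axis=0)
item_similarity = (ratings.T @ ratings) / np.outer(norms, norms)

# Recommend for user 0: score unseen items by similarity to items they rated.
user = ratings[0]
scores = item_similarity @ user
scores[user > 0] = -np.inf          # do not re-recommend items already rated
print("Recommended item index:", int(np.argmax(scores)))
```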

Impact:  The application of research-driven personalization has transformed the e-commerce and streaming industries. It enhances user experiences, increases customer engagement, and drives sales by presenting customers with products or content they are more likely to enjoy.

Education and Applied Research: Flipped Classroom Model

Example:  The Flipped Classroom Model is an applied research-based teaching approach that has gained popularity in education.

In this model, instructors leverage technology to deliver instructional content (such as video lectures) outside of class, allowing in-class time for active learning, discussions, and problem-solving. Applied research has informed the design and implementation of this pedagogical approach.

Impact:  The Flipped Classroom Model has shown promise in enhancing student engagement and learning outcomes. It capitalizes on research findings about how students learn best, emphasizing active participation and collaborative learning.

Agriculture and Applied Research: Precision Agriculture

Example:  Precision agriculture employs data-driven technology and applied research to optimize farming practices.

Farmers utilize satellite imagery, sensors, and data analytics to monitor crop conditions, soil health, and weather patterns. Applied research guides the development of precision farming techniques, enabling more efficient resource allocation and reducing environmental impact.

Impact:  Precision agriculture increases crop yields, conserves resources (such as water and fertilizer), and minimizes the ecological footprint of farming. This approach contributes to sustainable and economically viable agriculture.

These real-world examples underscore the versatility and impact of applied research across diverse domains. From healthcare and environmental conservation to business, education, and agriculture, applied research continually drives innovation, addresses critical challenges, and enhances the quality of life for individuals and communities worldwide.

Conclusion for Applied Research

Applied research is a powerful force for solving real-world problems and driving progress. By applying existing knowledge and innovative thinking, we can address healthcare challenges, protect our environment, improve businesses, enhance education, and revolutionize agriculture. Through this guide, you've gained valuable insights into the what, why, and how of applied research, unlocking the potential to make a positive impact in your field. So, go forth, conduct meaningful research, and be part of the solution to the world's most pressing issues. Remember, applied research is not just a concept; it's a practical approach that empowers individuals and teams to create solutions that matter. As you embark on your own applied research endeavors, keep the spirit of inquiry alive, remain open to new ideas, and never underestimate the transformative power of knowledge put into action.

How to Conduct Applied Research in Minutes?

Appinio, a real-time market research platform, is here to revolutionize your approach to applied research. Imagine having the power to get real-time consumer insights at your fingertips, enabling you to make swift, data-driven decisions for your business. Appinio takes care of all the heavy lifting in research and tech, so you can focus on what truly matters.

  • Lightning-Speed Insights:  From posing questions to gaining insights, it takes mere minutes. When you need answers fast, Appinio delivers.
  • User-Friendly:  No need for a PhD in research; our platform is so intuitive that anyone can use it effectively.
  • Global Reach:  Access a diverse pool of respondents from over 90 countries, with the ability to define the perfect target group using 1200+ characteristics.


What is Educational Research? + [Types, Scope & Importance]


Education is an integral aspect of every society and in a bid to expand the frontiers of knowledge, educational research must become a priority. Educational research plays a vital role in the overall development of pedagogy, learning programs, and policy formulation. 

Educational research is a spectrum that borders on multiple fields of knowledge, which means it draws from different disciplines. As a result, its findings are multi-dimensional and can be constrained by the characteristics of the research participants and the research environment.

What is Educational Research?

Educational research is a type of systematic investigation that applies empirical methods to solving challenges in education. It adopts rigorous and well-defined scientific processes in order to gather and analyze data for problem-solving and knowledge advancement. 

J. W. Best defines educational research as that activity that is directed towards the development of a science of behavior in educational situations. The ultimate aim of such a science is to provide knowledge that will permit the educator to achieve his goals through the most effective methods.

The primary purpose of educational research is to expand the existing body of knowledge by providing solutions to different problems in pedagogy while improving teaching and learning practices. Educational researchers also seek answers to questions about learner motivation, development, and classroom management.

Characteristics of Education Research  

While educational research can take numerous forms and approaches, several characteristics define its process and approach. Some of them are listed below:

  • It sets out to solve a specific problem.
  • Educational research adopts primary and secondary research methods in its data collection process. This means that in educational research, the investigator relies on first-hand sources of information and secondary data to arrive at a suitable conclusion.
  • Educational research relies on empirical evidence. This results from its largely scientific approach.
  • Educational research is objective and accurate because it measures verifiable information.
  • In educational research, the researcher adopts specific methodologies, detailed procedures, and analysis to arrive at the most objective responses.
  • Educational research findings are useful in the development of principles and theories that provide better insights into pressing issues.
  • This research approach combines structured, semi-structured, and unstructured questions to gather verifiable data from respondents.
  • Many educational research findings are documented for peer review before their presentation. 
  • Educational research is interdisciplinary in nature because it draws from different fields and studies complex factual relations.

Types of Educational Research 

Educational research can be broadly categorized into three types: descriptive research, correlational research, and experimental research. Each of these has distinct and overlapping features.

Descriptive Educational Research

In this type of educational research, the researcher merely seeks to collect data with regard to the status quo or present situation of things. The core of descriptive research lies in defining the state and characteristics of the research subject under study.

Because of its emphasis on the “what” of the situation, descriptive research can be termed an observational research method. In descriptive educational research, the researcher makes use of quantitative research methods including surveys and questionnaires to gather the required data.

Typically, descriptive educational research is the first step in solving a specific problem. Here are a few examples of descriptive research: 

  • A reading program to help you understand student literacy levels.
  • A study of students’ classroom performance.
  • Research to gather data on students’ interests and preferences. 

From these examples, you would notice that the researcher does not need to create a simulation of the natural environment of the research subjects; rather, he or she observes them as they engage in their routines. Also, the researcher is not concerned with creating a causal relationship between the research variables. 
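A descriptive study of classroom performance, for example, often boils down to summary statistics and frequency counts. The pandas sketch below shows the idea on invented data:

```python
# Descriptive sketch: summarize invented classroom performance data.
import pandas as pd

scores = pd.DataFrame({
    "student": ["A", "B", "C", "D", "E", "F"],
    "reading_score": [78, 85, 62, 90, 71, 84],
    "attendance_pct": [95, 88, 70, 98, 80, 92],
})

# describe() reports count, mean, spread, and quartiles for each numeric column.
print(scores[["reading_score", "attendance_pct"]].describe())

# A simple frequency count answers "what is the current state of things?"
print((scores["reading_score"] >= 75).value_counts())
```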

Correlational Educational Research

This is a type of educational research that seeks insights into the statistical relationship between two research variables. In correlational research, the researcher studies two variables intending to establish a connection between them. 

Correlational research can be positive, negative, or non-existent. Positive correlation occurs when an increase in variable A leads to an increase in variable B, while negative correlation occurs when an increase in variable A results in a decrease in variable B. 

When a change in any of the variables does not trigger a succeeding change in the other, then the correlation is non-existent. Also, in correlational educational research, the researcher does not need to alter the natural environment of the variables; that is, there is no need for external conditioning. 
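The strength and direction of such a relationship are usually quantified with a correlation coefficient. Here is a minimal sketch using scipy and invented scores for two variables:

```python
# Correlation sketch: how strongly are two invented variables related?
from scipy.stats import pearsonr

social_skills_score = [55, 60, 65, 70, 72, 78, 80, 85]
learning_behaviour_score = [50, 58, 62, 69, 75, 74, 82, 88]

r, p_value = pearsonr(social_skills_score, learning_behaviour_score)

print(f"Pearson r = {r:.2f} (positive values indicate a positive correlation)")
print(f"p-value = {p_value:.4f}")
```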

Examples of educational correlational research include: 

  • Research to discover the relationship between students’ behaviors and classroom performance.
  • A study into the relationship between students’ social skills and their learning behaviors. 

Experimental Educational Research

Experimental educational research is a research approach that seeks to establish the causal relationship between two variables in the research environment. It adopts quantitative research methods in order to determine the cause and effect in terms of the research variables being studied. 

Experimental educational research typically involves two groups – the control group and the experimental group. The researcher introduces some changes to the experimental group such as a change in environment or a catalyst, while the control group is left in its natural state. 

The introduction of these catalysts allows the researcher to determine the causative factor(s) in the experiment. At the core of experimental educational research lies the formulation of a hypothesis, so the overall research design relies on statistical analysis to confirm or reject this hypothesis.
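To illustrate, the sketch below compares invented post-test scores for a control and an experimental group, reporting both a significance test and an effect size (Cohen's d); the data and the choice of test are assumptions for demonstration only:

```python
# Experimental sketch: compare control and experimental group post-test scores.
# All scores are invented for illustration.
import numpy as np
from scipy.stats import ttest_ind

control = np.array([68, 72, 70, 65, 74, 69, 71, 67], dtype=float)
experimental = np.array([75, 80, 78, 74, 82, 77, 79, 76], dtype=float)

t_stat, p_value = ttest_ind(experimental, control)

# Cohen's d: difference in means divided by the pooled standard deviation.
pooled_sd = np.sqrt((control.var(ddof=1) + experimental.var(ddof=1)) / 2)
cohens_d = (experimental.mean() - control.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, Cohen's d = {cohens_d:.2f}")
```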

Examples of Experimental Educational Research

  • A study to determine the best teaching and learning methods in a school.
  • A study to understand how extracurricular activities affect the learning process. 

Based on functionality, educational research can be classified into fundamental research, applied research, and action research. The primary purpose of fundamental research is to provide insights into the research variables; that is, to gain more knowledge. Fundamental research does not solve any specific problems.

Just as the name suggests, applied research is a research approach that seeks to solve specific problems. Findings from applied research are useful in solving practical challenges in the educational sector such as improving teaching methods, modifying learning curricula, and simplifying pedagogy. 

Action research is tailored to solve immediate problems that are specific to a context, such as educational challenges in a local primary school. The goal of action research is to proffer solutions that work in that context, rather than to solve general or universal challenges in the educational sector. 

Importance of Educational Research

  • Educational research plays a crucial role in knowledge advancement across different fields of study. 
  • It provides answers to practical educational challenges using scientific methods.
  • Findings from educational research, especially applied research, are instrumental in policy reformulation. 
  • For the researcher and other parties involved in this research approach, educational research improves learning, knowledge, skills, and understanding.
  • Educational research improves teaching and learning methods by empowering you with data to help you teach and lead more strategically and effectively.
  • Educational research helps students apply their knowledge to practical situations.

Educational Research Methods 

  • Surveys/Questionnaires

A survey is a research method that is used to collect data from a predetermined audience about a specific research context. It usually consists of a set of standardized questions that help you to gain insights into the experiences, thoughts, and behaviors of the audience. 

Surveys can be administered physically using paper forms, face-to-face conversations, telephone conversations, or online forms. Online forms are easier to administer because they help you to collect accurate data and reach a larger sample size. Creating your online survey on a data-gathering platform like Formplus also allows you to analyze survey respondents' data easily. 

In order to gather accurate data via your survey, you must first identify the research context and the research subjects that would make up your data sample size. Next, you need to choose an online survey tool like Formplus to help you create and administer your survey with little or no hassles. 

  • Interviews

An interview is a qualitative data collection method that helps you to gather information from respondents by asking questions in a conversation. It is typically a face-to-face conversation with the research subjects in order to gather insights that will prove useful to the specific research context. 

Interviews can be structured, semi-structured, or unstructured. A structured interview is a type of interview that follows a premeditated sequence; that is, it makes use of a set of standardized questions to gather information from the research subjects. 

An unstructured interview is a type of interview that is fluid; that is, it is non-directive. During an unstructured interview, the researcher does not make use of a set of predetermined questions; rather, he or she spontaneously asks questions to gather relevant data from the respondents. 

A semi-structured interview is the mid-point between structured and unstructured interviews. Here, the researcher makes use of a set of standardized questions, yet he or she still makes inquiries outside these premeditated questions as dictated by the flow of the conversation in the research context. 

Data from Interviews can be collected using audio recorders, digital cameras, surveys, and questionnaires. 

  • Observation

Observation is a method of data collection that entails systematically selecting, watching, listening, reading, touching, and recording behaviors and characteristics of living beings, objects, or phenomena. In the classroom, teachers can adopt this method to understand students’ behaviors in different contexts. 

Observation can be qualitative or quantitative in approach. In quantitative observation, the researcher aims to collect statistical information from respondents, while in qualitative observation, the researcher aims to collect qualitative data from respondents. 

Qualitative observation can further be classified into participant or non-participant observation. In participant observation, the researcher becomes a part of the research environment and interacts with the research subjects to gather info about their behaviors. In non-participant observation, the researcher does not actively take part in the research environment; that is, he or she is a passive observer. 

How to Create Surveys and Questionnaires with Formplus

  • On your dashboard, choose the “create new form” button to access the form builder. You can also choose from the available survey templates and modify them to suit your need.
  • Save your online survey to access the form customization section. Here, you can change the physical appearance of your form by adding preferred background images and inserting your organization’s logo.
  • Formplus has a form analytics dashboard that allows you to view insights from your data collection process such as the total number of form views and form submissions. You can also use the reports summary tool to generate custom graphs and charts from your survey data. 

Steps in Educational Research

Like other types of research, educational research involves several steps. Following these steps allows the researcher to gather objective information and arrive at valid findings that are useful to the research context. 

  • Define the research problem clearly. 
  • Formulate your hypothesis. A hypothesis is the researcher’s reasonable guess based on the available evidence, which he or she seeks to prove in the course of the research.
  • Determine the methodology to be adopted. Educational research methods include interviews, surveys, and questionnaires.
  • Collect data from the research subjects using one or more educational research methods. You can collect research data using Formplus forms.
  • Analyze and interpret your data to arrive at valid findings. In the Formplus analytics dashboard, you can view important data collection insights and you can also create custom visual reports with the reports summary tool. 
  • Create your research report. A research report details the entire process of the systematic investigation plus the research findings. 

Conclusion 

Educational research is crucial to the overall advancement of different fields of study and learning, as a whole. Data in educational research can be gathered via surveys and questionnaires, observation methods, or interviews – structured, unstructured, and semi-structured. 

You can create a survey/questionnaire for educational research with Formplus. As a top-tier data tool, Formplus makes it easy for you to create your educational research survey in the drag-and-drop form builder and share it with survey respondents using one or more of the form-sharing options. 


Applied Research: Definition, Types & Examples

Applied research is a type of research in which the problem is already known to the researcher. It is used to answer specific questions.

Every research project begins with a clear definition of the investigation’s purpose, which helps to identify the research procedure or approach used. In this sense, a researcher can conduct either basic or applied research.

This research focuses on answering specific questions to solve a specific problem. It tries to identify a solution to a cultural or organizational problem and is often a follow-up research plan for basic or pure research.

In this blog, we will explain the types of applied research and give some examples. But before that, we will go through what it is.

What is applied research?

Applied research is a solution-oriented way of addressing specific research problems or issues. These problems or issues can be on an individual, group, or societal level. Rather than setting out to expand theory, it goes straight to finding workable solutions.

It is often called a “scientific process” because it puts the available scientific tools to use to find answers.

As in any research, the researcher identifies the problem, makes a hypothesis, and then experiments to test it. It goes deeper into the findings of pure or basic research.

LEARN ABOUT:   Research Process Steps

Types of applied research

This research has three types: 

  • Evaluation research, 
  • Research and Development, and 
  • Action research. 

The short versions of each type are explained below:

  • Evaluation research

Evaluation research is the first type of applied research. It analyzes existing information about a research subject to reach objective conclusions or to help people make better decisions sooner. Evaluation research is used most often in business settings. 

For example, an organization might use it to work out how its overhead costs can be reduced, and by how much.

  • Research and development

Research and Development is the second type of applied research. Its main goal is to create or design new products, goods, or services that meet the needs of specific markets. It identifies what the market needs and looks for new ways to improve products or services an organization already offers.

  • Action research

Action research is the third type of applied research. It studies practices as they happen in everyday, real-world settings. Its goal is to find practical solutions to business problems and point the organization in the right direction.

LEARN ABOUT: Action Research

Examples of applied research

Applied research is used in many areas, from the natural sciences to the social sciences. Below, we describe how it is used in several of these fields and give some examples:

  • Applied research in business

Applied research in the business sector centers on an organization's products and services. It helps organizations understand market needs and trends and then shape their offerings to fit their customers.

Businesses benefit from this research because it lets them detect gaps in their knowledge and obtain first-hand information on target market preferences.

  • It can improve hiring.
  • It improves work and policy.
  • It identifies workplace skill gaps.
  • Applied research in education

Applied research is used in education to test different teaching approaches and to find better ways of teaching and learning. Before new education policies are implemented, they are tested to see how well they work, how they affect teaching, and how they change the classroom.

Applied education research uses quantitative and qualitative methods to collect data from first-hand sources. This information is then analyzed and interpreted to generate useful results or conclusions.

LEARN ABOUT: Qualitative Interview

Most applied research in this field is done to develop and test different ways of doing things by trying them out in different situations. It is based on accurate observations and descriptions of the real world.
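
To make the "develop and test" step concrete, here is a minimal sketch of how a researcher might compare assessment scores from a class taught with a new method against a control class. The score lists are invented placeholder data, and a real study would use a design and analysis suited to its setting; this is only meant to show the shape of the comparison.

```python
# A minimal sketch comparing outcomes for a class taught with a new
# method against a control class. The scores are placeholder data.
from statistics import mean, stdev

from scipy import stats

new_method_scores = [78, 85, 92, 74, 88, 81, 90, 79, 86, 84]
control_scores = [72, 80, 75, 70, 83, 77, 74, 79, 71, 76]

# Descriptive statistics for each group.
for name, scores in [("new method", new_method_scores), ("control", control_scores)]:
    print(f"{name}: mean={mean(scores):.1f}, sd={stdev(scores):.1f}, n={len(scores)}")

# Welch's two-sample t-test (does not assume equal variances).
result = stats.ttest_ind(new_method_scores, control_scores, equal_var=False)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```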

  • Applied research to understand the reach of online learning initiatives.
  • Applied research to promote teacher-student classroom engagement.
  • Applied research on a new math program.
  • Applied research in science

As noted earlier, applied research is often called a scientific process because it uses available scientific tools to find answers. It can be used in physics, microbiology, thermodynamics, and other fields.

  • Applied research to find a cure for a disease.
  • Applied research to improve agricultural practices.
  • Applied research to test new laboratory equipment.
  • Applied research in psychology

Researchers use applied research in psychology to understand how people behave at work, how HR functions, and how the organization is growing and changing, so that they can come up with solutions.

It is used widely in areas where researchers try to understand how people think and then design solutions that best fit their behavior.

  • Applied research to find new ways of treating depression.
  • Applied research to improve students' grades by emphasizing practical education.
  • Applied research to create a plan that keeps employees coming to work regularly.
  • Applied research in health

Applied research is used to examine new drugs in the medical industry. It combines scientific knowledge and procedures with health-care experience to produce evidence-based results.

  • Applied research in heart surgery.
  • Applied research to determine a drug's efficacy.
  • Applied research on a medicine's adverse effects.

LEARN ABOUT: Theoretical Research

Applied research is an important approach because it helps organizations find real-world solutions to specific problems while also increasing their output and productivity. In contrast to basic research, which focuses on building theories that explain phenomena, applied research focuses on gathering evidence to find solutions.

In applied research, the researcher uses qualitative and quantitative methods to collect data, such as questionnaires, interviews, and observation. Conducting interviews is one example of collecting qualitative data in education. These methods help the researcher gather real-world evidence, which is then analyzed according to the type of applied research and its main focus.

At QuestionPro, we give researchers access to a library of long-term research insights and tools for collecting data, like our survey software. Go to InsightHub if you want to see a demo or learn more about it.


40+ Reasons Why Research Is Important in Education

Do you ever wonder why research is so essential in education? What impact does it really have on teaching and learning?

These are questions that plague many students and educators alike.

According to experts, here are the reasons why research is important in the field of education.

Joseph Marc Zagerman, Ed.D. 

Assistant Professor of Project Management, Harrisburg University of Science and Technology 

Wisdom is knowledge rightly applied. Conducting research is all about gaining wisdom. It can be an exciting part of a college student’s educational journey — be it a simple research paper, thesis, or dissertation. 

Related: What Is the Difference Between Knowledge and Wisdom?

As we know, there is primary research and secondary research: 

  • Primary research is first-hand research where the primary investigator (PI) or researcher uses a quantitative, qualitative, or mixed-methodology approach in gaining original data. The process of conducting primary research is fascinating but beyond the scope of this article. 
  • In contrast, secondary research examines secondhand information by describing or summarizing the work of others. This article focuses on the benefits of conducting secondary research by immersing oneself in the literature.  

Research develops students into becoming more self-sufficient

There are many benefits for college students to engage in scholarly research. For example, the research process itself develops students into becoming more self-sufficient. 

In other words,  students enhance their ability to ferret out information  regarding a specific topic with a more functional deep dive into the subject matter under investigation. 

The educational journey of  conducting research allows students to see the current conversations  taking place regarding a specific topic. One can parse out the congruity and incongruity among scholars about a particular topic. 

Developing one’s  fundamental library skills  is a tremendous upside in becoming self-sufficient. And yet another benefit of conducting scholarly research is reviewing other writing styles, which often enhances one’s reading and writing skills.   

Compiling an annotated bibliography is often a critical first step in conducting scholarly research. Reviewing, evaluating, and synthesizing information from several sources further develops a student's critical thinking skills.

Related: 9 Critical Thinking Examples

Furthermore, in becoming immersed in the literature, students can recognize associated gaps , problems , or opportunities for additional research. 

From a doctoral perspective, Boote & Beile (2005) underscore the importance of conducting a literature review as the foundation for sound research and acquiring the skills and knowledge in analyzing and synthesizing information.  

So, if conducting research is beneficial for college students, why do some college students have problems with the process or believe it doesn’t add value? 

First off, conducting research is hard work . It takes time. Not to make a sweeping generalization, but some college students embrace a  “fast-food”  expectation of academic assignments. 

For example, finish a quiz, complete a discussion board, or watch a YouTube video and check it off your academic to-do list right away. In contrast, conducting a literature review takes time. It’s hard work.

It requires discipline, focus, and effective time management strategies. 

Yet, good, bad, or indifferent, it remains that the process of conducting research is often perceived as a non-value-added activity for many college students. Why is this so? Is there a better way?   

From an educational standpoint, research assignments should not be a “one and done.” Instead, every course should provide opportunities for students to engage in research of some sort. 

If a student must complete a thesis or dissertation as part of their degree requirement, the process should begin early enough in the program. 

But perhaps the most important note for educators is to align the research process with real-world takeaways . That builds value . That is what wisdom is all about. 

Dr. John Clark, PMP 

Corporate Faculty (Project Management), Harrisburg University of Science and Technology 

Research provides a path to progress and prosperity

Research integrates the known with the unknown and becomes the path to progress and prosperity. Extant knowledge, gathered through previous research, serves as the foundation for attaining new knowledge. 

The essence of research is a continuum.

Only through research is the attainment of new knowledge possible. New knowledge, formed through new research, is contributed back to the knowledge community. In the absence of research, the continuum of knowledge is severed. 

In keeping with this continuum, the desire and the know-how to conduct research must be passed on to the next generation. This makes it all the more important to convey both the techniques of research and the hunger for new knowledge to younger generations. 

It can be argued, humbly, that education is what carries the importance of research forward. The synergy between research and education perpetuates the continuum of knowledge. 

Through education, the younger generations are instilled with the inspiration to address the challenges of tomorrow. 

Related: Why Is Education Important in Our Life?

It plants the seeds for scientific inquiry into the next generation

Research, whether qualitative or quantitative , is grounded in scientific methods . Instructing our students in the fundamentals of empirically-based research effectively plants the seeds for scientific inquiry into the next generation. 

The application and pursuit of research catalyze critical thinking . Rather than guiding our students to apply pre-existing and rote answers to yesterday’s challenges, research inspires our students to examine phenomena through new and intriguing lenses. 

The globalized and highly competitive world of today demands that younger generations think critically and creatively to respond to the new challenges of the future. 

Consequently, through research and education, the younger generations are  inspired  and  prepared  to find new knowledge that advances our community. Ultimately, the synergy between research and education benefits society for generations to come. 

Professor John Hattie and Kyle Hattie

Authors, “ 10 Steps to Develop Great Learners “

Research serves many purposes

Imagine your doctor or pilot disregarding research and relying on experience, anecdotes, and opinions. Imagine them being proud of not having read a research article since graduation. Imagine them depending on the tips and tricks of colleagues.

Research serves many great purposes, such as:

  • Keeping up to date with critical findings
  • Hearing the critiques of current methods of teaching and running schools
  • Standing on the shoulders of giants to see our world better

Given that so much educational research is now available, reading syntheses of the research, hearing others' interpretations and implementations of it, and seeing the research in action all help. 

What matters most is the interpretation of the research — your interpretation, the author’s interpretation, and your colleagues’ interpretation. It is finding research that improves our ways of thinking, our interpretations, and our impact on students. 

There is also much to be gained from reading about the methods of research, which provide ways for us to question our own impact, our own theories of teaching and learning, and help us critique our practice by standing on the shoulders of others. 

Research also helps to know what is exciting, topical, and important.

It enables us to hear other perspectives

Statements without research evidence are but opinions. Research is not only about what is published in journals or books, but what we discover in our own classes and schools, provided we ask,  “What evidence would I accept that I am wrong?” 

This is the defining question separating research from opinion. As humans, we are great at self-confirmation — there are always students who succeed in our class, we are great at finding evidence we were right, and we can use this evidence to justify our teaching. 

But what about those who did not succeed? We can't be blind to them, and we should not ascribe their lack of improvement to them (poor homes, unmotivated, too far behind) but to us. 

We often need to hear other perspectives on the evidence we collect from our classes and hear more convincing explanations and interpretations about what worked best and what did not, who succeeded and who did not, and whether the gains were sufficient. 

When we do this with the aim of improving our impact on our students, then everyone is the winner.

It provides explanations and bigger picture interpretations

Research and evaluation on your class and school can be triangulated with research studies in the literature to provide alternative explanations, to help see the importance (or not) of the context of your school. And we can always write our experiences and add to the research.

For example, we have synthesized many studies of how best parents can influence their children to become great learners. Our fundamental interpretation of the large corpus of studies is that it matters more how parents think when engaged in parenting. 

For instance, their expectations, their listening and responding skills, how they react to error and struggle, and whether their feedback is heard, understood, and actionable. 

Research is more than summarizing; it provides explanations and bigger-picture interpretations, which is what we aimed for in our "10 steps for Parents" book.

Dr. Glenn Mitchell, MPH, CPE, FACEP

Vice Provost for Institutional Effectiveness , Harrisburg University of Science and Technology 

Research gives us better knowledge workers

There is a tremendous value for our society from student participation in scientific research. At all levels – undergraduate, master's, and Ph.D. – students learn the scientific method that has driven progress since the Enlightenment over 300 years ago. 

  • They learn to observe carefully and organize collected data efficiently. 
  • They know how to test results for whether or not they should be believed or were just a chance finding. 
  • They learn to estimate the strength of the data they collect and see in other scientists’ published work. 

With its peer review and wide visibility, the publication process demands that the work be done properly, or it will be exposed as flawed or even falsified. 

So students don’t just learn how to do experiments, interviews, or surveys. They learn that the process demands rigor and ethical conduct to obtain valid and reliable results. 

Supporting and educating a new generation of science-minded citizens makes our population more likely to support proven facts and take unproven allegations with a grain of salt until they are rigorously evaluated and reviewed. 

Thus, educating our students about research and involving them with hands-on opportunities to participate in research projects gives us better knowledge workers to advance technology and produce better citizens.

Chris A. Sweigart, Ph.D.

Board Certified Family Physician | Education Consultant, Limened

Research plays a critical role in education as a guide for effective practices, policies, and procedures in our schools. 

Evidence-based practice, which involves educators intentionally engaging in instructional practices and programs with strong evidence for positive outcomes from methodologically sound research, is essential to ensure the greatest probability of achieving desired student outcomes in schools.

It helps educators have greater confidence to help students achieve outcomes

There are extensive options for instructional practices and programs in our schools, many of which are promoted and sold by educational companies. In brief, some of these benefit students and others don't, producing no results or even negatively impacting students.

Educators need ways to filter through the noise to find practices that are most likely to actually produce positive results with students. 

When a practice has been identified as evidence-based, that means an array of valid, carefully controlled research studies have been conducted that show significant, positive outcomes from engaging in the practice. 

By choosing to engage in these practices, educators can have greater confidence in their ability to help students achieve meaningful outcomes.

There are organizations focused on evaluating the research base for programs and practices to determine whether they are evidence-based. 

For example, some websites provide overviews of evidence-based practices in education while my website provides practical guides for teachers on interventions for academic and behavioral challenges with a research rating scale. 

Educators can use these resources to sift through the research, which can sometimes be challenging to access and translate, especially for busy teachers.

It supports vulnerable student populations

Schools may be especially concerned about the success of vulnerable student populations, such as students with disabilities, who are at far greater risk than their peers of poor short- and long-term outcomes. 

In many cases, these students are already behind their peers one or more years academically and possibly facing other challenges.

With these vulnerable populations, it’s imperative that we engage in practices that benefit them and do so faster than typical practice—because these students need to catch up! 

Every minute and dollar we spend on a practice not supported by research is a gamble with students' well-being and futures that may only make things worse. 

These populations of students need our best in education, which means choosing practices with sound evidence that are most likely to help.  

If I were going to a doctor for a serious illness, I would want them to engage in practice guided by the cutting edge of medical science to ensure my most significant chance of becoming healthy again. And I want the same for our students who struggle in school.

Will Shaw PhD, MSc

Sport Scientist and Lecturer | Co-founder, Sport Science Insider

Research creates new knowledge and better ideas

At the foundation of learning is sharing knowledge, ideas, and concepts. However, few concepts are set in stone; instead, they are ever-evolving ideas that hopefully get closer to the truth. 

Research is the process that underpins this search for new and better-defined ideas. For this reason, it is crucial to have very close links between research and teaching. The further the gap, the less informed teaching will become. 

Research provides answers to complicated problems

Another key concept in education is sharing the reality that most problems are complicated — but these are often the most fun to try to solve. For example: how does the brain control movement? Or how can we optimize skill development in elite athletes?

Here, research can be used to show how many studies can be pulled together to find answers to these challenging problems. But students should also understand that these answers aren’t perfect and should be challenged.

Again, this process creates a deeper learning experience and students who are better equipped for the world we live in.

Basic understanding of research aids students in making informed decisions

We’re already seeing the worlds of tech and data drive many facets of life in a positive direction — this will no doubt continue. However, a byproduct of this is that data and science are commonly misunderstood, misquoted, or, in the worst cases, deliberately misused to tell a false story. 

If students have a basic understanding of research, they can make informed decisions based on reading the source and their own insight. 

This doesn't mean they have to disregard all headlines; instead, they can decide to what extent the findings are trustworthy and dig deeper to find meaning. 

A recent example is this BBC News story that did an excellent job of reporting a study looking at changes in brain structure as a result of mild COVID. The headline finding of a 2% average loss in brain structure after mild COVID sounds alarming, and it is indeed one of the findings from the study. 

However, if students have the ability to scan the full article  linked in the BBC article, they could learn that: 

  • The measure that decreased by 2% was a ‘proxy’ (estimate) for tissue damage 
  • Adults show 0.2 – 0.3% loss every year naturally
  • Some COVID patients didn't show any loss at all, but the average loss between the COVID and control groups was 2%
  • We have no idea currently if these effects last more than a few weeks or months (more research is in progress)

This is an excellent research paper, and it is well-reported, but having the ability to go one step further makes so much more sense of the findings. This ability to understand the basics of research makes the modern world far easier to navigate.
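
As a small worked example of going that one step further, the snippet below uses two figures quoted above – the 2% average reduction and the 0.2–0.3% typical annual decline – to express the reported change in years of normal age-related change. The arithmetic is purely illustrative and says nothing about whether the effect persists.

```python
# Put the reported 2% average reduction in context using the article's
# own figure of roughly 0.2-0.3% natural decline per year.
covid_related_change = 2.0             # percent (a proxy for tissue damage)
natural_decline_per_year = (0.2, 0.3)  # percent per year, typical range

years = sorted(covid_related_change / rate for rate in natural_decline_per_year)
print(f"2% is roughly {years[0]:.0f}-{years[1]:.0f} years of typical age-related change")
```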

Helen Crabtree

Teacher and Owner, GCSE Masterclass

It enables people to discover different ideas 

Research is crucial to education. It enables people to discover different ideas, viewpoints, theories, and facts. From there, they will weigh up the validity of each theory for themselves. 

Finding these things out for oneself causes a student to think more deeply and come up with their personal perspectives, hypotheses, and even to question widely held facts. This is crucial for independent thought and personal development.

Without such research skills, we are left open to distortion and manipulation — a frighteningly Orwellian future awaits us if research skills are lost. 

You only need to look at current world events to see how the erosion of media freedom and of genuine journalistic investigation (or research) is distorting the understanding of the real world in the minds of many people in one of the most powerful countries in the world. 

Only those who are able to conduct research and evaluate the independence of facts can genuinely understand the world. 

Genuine research opens young people’s eyes to facts and opinions

Furthermore, learning how to conduct genuine research instead of merely a Wikipedia or Google search is a skill in itself, allowing students to search through archives and find material that is not widely known about and doesn’t appear at the top of search engines. 

Genuine research will open young people’s eyes to facts and opinions that may otherwise be hidden. This can be demonstrated when we look at social media and its algorithms.

Essentially, if you repeatedly read or “like” pieces with a specific worldview, the algorithm will send you more articles or videos that further back up that view. 

This, in turn, creates an echo chamber whereby your own opinion is repeatedly played back to you with no opposing ideas or facts, reinforcing your view in a one-sided way.

Conducting genuine research is the antidote.

Lastly, by conducting research, people discover how to write articles and dissertations and how to conduct their own experiments to justify their ideas. A world without genuine, quality research is a world that is open to distortion and manipulation.

Pritha Gopalan, Ph.D.

Director of Research and Learning, Newark Trust for Education

It allows us to understand progress and areas of development

Research is vital in education because it helps us be intentional about how we frame and document our practice. At The Trust, we aim to synthesize standards-based and stakeholder-driven frames to ensure that quality also means equity.

Research gives us a lens to look across time and space and concretely understand our progress and areas for improvement. We are careful to include all voices, using representative and network sampling to capture multiple perspectives from different sites.

Good research helps us capture variation in practice, document innovation, and share bright spots and persistent challenges with peers for mutual learning and growth. 

This is key to our work as educators and a city-based voice employing and seeking to amplify asset-based discourses in education.

Research represents stakeholders’ aspirations and needs

When done in  culturally sustaining  and  equitable ways , research powerfully represents stakeholder experiences, interests, aspirations, and needs. Thus, it is critical to informed philanthropy, advocacy, and the continuous improvement of practice. 

Our organization is constantly evolving in its own cultural competence, and it embodies this pursuit in our research so that the voices of the educators, families, children, and partners we work with are harmonized.

This is done to create the "big picture" of where we are and where we need to go, together, to ensure equitable, quality conditions for learning in Newark.

Jessica Robinson

Educator | Human Resources and Marketing Manager, SpeakingNerd

Research makes the problem clearer

In the words of Stanley Arnold,  “Every problem contains within itself the seeds of its own solution.”  These words truly highlight the nature of problems and solutions. 

If you understand a problem thoroughly, you eventually move closer to the solution, for you begin to see what makes the problem arise. When the root of the problem is clear, the solution becomes obvious. 

For example, if you suffer from headaches frequently, your doctor will get specific tests done to understand the exact problem (which is research). Once the root cause of the headache becomes clear, your doctor will give you suitable medicines to help you heal. 

This implies that to reach a solution, it is crucial for us to understand the problem first. Research helps us with that. By making the problem clearer, it moves us closer to the solution. 

As the main aim of education is to produce talented individuals who can generate innovative solutions to the world’s problems, research is of utmost importance. 

Research boosts critical thinking skills

Critical thinking is defined as observing, understanding, analyzing, and interpreting information and arguments to form suitable conclusions. 

In today's world, critical thinking skills are among the most valued skills. Companies look for critical thinking skills in a candidate before hiring them. This is because critical thinking promotes innovation, and innovation is the need of the hour in almost every sector. 

Further, research is one of the most effective ways of developing critical thinking skills. When you conduct research, you eventually learn the art of observing, evaluating, analyzing, interpreting information, and deriving conclusions. So, this is another major reason why research is crucial in education. 

Research promotes curiosity

In the words of Albert Einstein, "Curiosity is more important than knowledge." Now, you may wonder why that is. Basically, curiosity is a strong desire to learn or know things. It motivates you to pursue an everlasting journey of learning. 

Every curious individual observes things, experiments, and learns. Knowledge follows curiosity, but the reverse is not true. A person may gain a lot of knowledge about many things despite not being curious. But then they might not use that knowledge to innovate, because the curiosity is missing. 

Hence, their knowledge might become futile, or they may simply remain a bookworm. So curiosity is more important than knowledge, and research promotes curiosity. How? 

The answer is that research helps you plunge into things. You observe what is not visible to everyone. You explore the wonders of nature and other phenomena. The more you know, the more you realize what you don't know, which ignites curiosity. 

Research boosts confidence and self-esteem

Developing confident individuals is one of the major goals of education. When students undertake the journey of research and come up with important conclusions or results, they develop immense confidence in their knowledge and skills. 

Related: Why is Self Confidence Important?

They feel as if they can do anything. This is another important reason why research is crucial in education. 

Research helps students evolve into independent learners

Most of the time, teachers guide students on the path of learning. But, research opportunities give students chances to pave their own learning path. 

It is like they pursue a journey of learning by themselves. They consult different resources that seem appropriate, use their own methods, and shape the journey on their own. 

This way, they evolve into independent learners, which is excellent as it sets the foundation for lifelong learning. 

Theresa Bertuzzi

Chief Program Development Officer and Co-founder, Tiny Hoppers

Research helps revamp the curriculum and include proven best techniques

Research is critical in education as our world is constantly evolving, so approaches and solutions need to be updated to  best suit  the current educational climate. 

With the influx of child development and psychology studies, educators and child product development experts are  honing  how certain activities, lessons, behavior management, etc., can impact a child’s development.

For example, child development research has led to the development of toy blocks, jigsaws, and shape sorters, which have proven to be linked to: 

  • Spatial thinking
  • Logical reasoning
  • Shape and color recognition

There is  no one-size-fits-all  when approaching educational practices; therefore, we can  revamp  the curriculum and include proven best techniques and methodologies by continuously researching past strategies and looking into new tactics. 

Effective teaching requires practical, evidence-based approaches rather than guesswork. 

The combination of work done by educators of children of all ages and research in child development psychology allows new developments in toys, activities, and practical resources for other educators, child care workers, and parents. This ensures children can reap the benefits of child development research. 

It enables a better understanding of how to adapt methods of instruction

In addition, with all of the various learning styles, researching the diversity in these types will enable a better understanding of how to adapt methods of instruction to all learners’ needs. 

Child development research gives educators, child care workers, and parents the ability to guide the average child at specific age ranges, but  each child is unique in their own needs . 

It is important to note that while this is the average, it is up to the educator and childcare provider to  adapt accordingly  to each child based on their individual needs. 

Scott Winstead

Education Technology Expert | Founder, My eLearning World

It’s the most important tool for expanding our knowledge

Research is an integral part of education for teachers and students alike. It’s our most important tool for expanding our knowledge and understanding of different topics and ideas.

  • Educators need to be informed about the latest research to make good decisions and provide students with quality learning opportunities.
  • Research provides educators with valuable information about how students learn best so they can be more effective teachers. 
  • It also helps us develop new methods and techniques for teaching and allows educators to explore different topics and ideas in more detail.
  • For students, research allows them to explore new topics and develop critical thinking skills along with analytical and communication skills.

In short, research is vital in education because it helps us learn more about the world around us and improves the quality of education for everyone involved.

Connor Ondriska

CEO, SpanishVIP

It creates better experiences and improves the quality of education

Research continues to be so important in education because we should constantly be improving as educators. If one of the goals of education is to continually work on making a better world, then the face of education a century ago shouldn’t look the same today. 

You can apply that same logic on a shorter scale, especially with the technological boom. So research is a way for educators to learn about what's working, what isn't, and which areas we need to focus on. 

For example, we focus purely on distance learning, which means we need to innovate in a field that doesn’t have a ton of research yet. If we’re being generous, we can say that distance education became viable in the 1990s, but people are just now accepting it as a valid way to learn. 

You can't necessarily apply everything you know about traditional pedagogy to an online setting; it's an entirely different context that requires its own study. 

As more research comes out about the effectiveness and understanding of this type of education, we can adapt as educators to help our students. Ultimately, that research will help us create better experiences and improve the quality of distance education. 

The key here is to make sure that research is available and that teachers actually respond to it. In that sense, ongoing research and continual teacher training can go hand-in-hand. 

It leads to more effective educational approaches

Research in the field of language learning is significant. We’re constantly changing our understanding of how languages are learned. Over just the last century, there have been dozens of new methodologies and approaches. 

Linguists/pedagogues have frequently re-interpreted the language-learning process, and all of this analytical research has revolutionized the way we understand language. 

We started with simple Grammar Translation (how you would learn Latin), and now research focuses on more holistic communication techniques. So we’ve definitely come a long way, but we should keep going. 

Now with distance education, we’re experiencing another shift in language learning. You don’t need to memorize textbook vocabulary. You don’t need to travel abroad to practice with native speakers. 

Thanks to ongoing research, we’ve developed our own method of learning Spanish that’s been shown to be 10x more efficient than traditional classroom experiences. 

If we've been able to do that, then maybe someone will develop an even better methodology in the future. Research and innovation are only leading to more effective educational approaches that benefit the entire society.  

Research helps everyone in the education field to become better

This holds true in both the public and private sectors. Even though we're an education business, public schools should also be adapting to new ways of utilizing distance learning. 

As more technology becomes readily available to students, teachers should capitalize on that to ensure everyone receives a better education.

Related: How Important Is Technology in Education  

There is now a vast body of research about technology in the language classroom, so why not take advantage of that research and create better lesson plans? 

So as new research appears, everyone in the education field will become a better teacher. And that statement will stand ten years from now. Education needs to adapt to the needs of society, but we need research to know how we can do that appropriately.  

James Bacon, MSEd

Director of Outreach and Operations, Edficiency

Research gives schools confidence to adopt different practices

Research in education is important to inform teachers, administrators, and even parents about what practices have been shown to impact different outcomes that can be important, like:

  • Student learning outcomes (often measured by test scores)
  • Graduation and/or attendance rates
  • Social-emotional skills 
  • College and/or job matriculation rates, among many others

Research can give insights into which programs, teaching methods, curricula, schedules, and other structures provide which benefits to which groups and thus give schools the confidence to adopt these different practices.

It measures the impact of innovations 

Research in education also enables us to measure different innovations that are tried in schools, which is also essential to push the field of education further. 

It also helps ensure that students, individually and collectively, learn more than those we've educated in the past, or at least learn in different ways, so they can respond to change and help shape society's future. 

Research can give us the  formal feedback  to know if innovations happening in classrooms, schools, and districts across the country (and the world) are having the  intended  impact and whether or not they should be continued, expanded, discontinued, or used only in specific contexts.

Without research, we might continue to innovate to the detriment of our students and education system without knowing it.

Loic Bellet

Business English Coach, Speak Proper English

It provides numerous advantages for exploring your profession

Developing a research-based approach to enhance your practice gives you the evidence you need to make changes in your classroom, school, and beyond. 

In light of the ongoing discussion over what works and why, there are numerous advantages to exploring your profession, whether for immediate improvement via action research or, more broadly, for acquiring awareness and knowledge of topics of interest and significance. 

There are several advantages to incorporating research into your practice. This is why research is a part of teacher education from the beginning. 

Research can be used to:

  • Assist you in discovering solutions to specific issues that may arise in your school or classroom.
  • Support professional knowledge, competence, and understanding of learning.
  • Connect you to information sources and expert support networks.
  • Spell out the goals, processes, and objectives when implementing change, such as in curriculum, pedagogy, or assessment.
  • Improve your organizational, local, and national grasp of your professional and policy environment, allowing you to educate and lead more strategically and effectively.
  • Develop your agency, impact, self-efficacy, and voice inside your school and more broadly within the profession.

Each of these may entail an investigation based on evidence from your own environment and evidence from other sources.

Although research methodologies have progressed significantly, the importance of research itself has only grown. 

We've seen online research gaining popularity, and the value of research is increasing by the day. As a result, companies are looking for online researchers to work with them and carry out research that yields accurate data from the internet. 

Furthermore, research has become a requirement for survival: we can't make business judgments, launch businesses, or prove theories without extensive research. A great deal of effort has gone into making research a base of information and advancement.

Saikiran Chandha

CEO and Founder, Typeset

It offers factual or evidence-based learning approach

It's evident that research and education are intertwined! On a broader level, education is what you experience as a fundamental part of your learning process (in institutions, colleges, schools, etc.). 

It improves your skills, knowledge, and social and moral values. Research, on the other hand, is something you owe a great deal to, as it provides scientific and systematic solutions to your educational hardships. 

For example, research aids in implementing different teaching methods, identifying and addressing learning difficulties, developing curricula, and more. 

Accordingly, research plays a significant role in offering a factual or evidence-based learning approach to academic challenges and concerns. 

And the two primary benefits of research in education are:

Research helps to improve the education system

Yes, the prime focus of research is to excavate, explore, and discover new, innovative, and creative approaches that enhance teaching and learning methods based on the latest educational needs and advancements. 

Research fuels your knowledge bank

Research is all about learning new things, data sourcing, analysis, and more. So, technically, research replenishes your knowledge bank with factual data. 

Thus, it helps educators and teachers develop their subject knowledge, deepens their erudition, and increases overall classroom performance.

Chaye McIntosh, MS, LCADC

Clinical Director,  ChoicePoint Health

It improves the learning curve

Research, I believe, is a fundamental part of education, be it by the student or the teacher. 

When you research a topic, you will not just learn and read about stuff related to the topic but also branch out and learn new and different things. This improves the learning curve, and you delve deeper into topics, develop interest and increase your knowledge. 

Academically and personally, I can grow every day and attain the confidence that the abundance of information brings me.

It builds up understanding and perspective

Research can help you build up understanding and perspective regarding your niche of choice, and help you evaluate and analyze it with sound theories and a factual basis rather than learning just for the sake of it.

Educationally, it can help you form informed opinions and sound logic that are beneficial in school and in everyday life. Not only this: when you do proper research on any educational topic and learn the facts and figures, chances are you will score better than classmates who have only textbook knowledge.  

So the research will give you an edge over your peers and help you perform better in exams and classroom discussions.

Matthew Carter

Attorney,  Inc and Go

Solid research is a skill you need in all careers

That goes double for careers like mine. You might think that attorneys learn all the answers in law school, but in fact, we know how to find the answers we need through research. 

Doctors and accountants will tell you the same thing. No one can ever hold all the knowledge they need. You have to be able to find the correct answer quickly. School is the perfect place to learn that.

Research enables you to weigh sources and find the best ones

How do you know the source you have found is reliable? If you are trained in research, you’ve learned how to weigh sources and find the best ones. 

Comparing ideas and using them to draw bigger conclusions helps you not only in your career but in your life. As we have seen politically in the last few years, it enables you to be a more informed citizen.

Research makes you more persuasive

Want to have more civil conversations with your family over the holidays? Being able to dig into a body of research and pull out answers that you actually understand makes you a more effective speaker. 

People are more likely to believe you when you have formed an opinion through research rather than parroting something you saw on the news. They may even appreciate your efforts to make the conversation more logical and civil.

As for me, I spend a lot of time researching business formation now, and I use that in my writing. 

George Tsagas

Owner, eMathZone

Research helps build holistic knowledge

Your background will cause you to approach a topic with a preconceived notion. When you take the time to see the full context of a situation, your perspective changes. 

Researching one topic also expands your perspective of other topics. The information you uncover when studying a particular subject can inform other tangential subjects in the future as you build a greater knowledge of the world and how connected it is. 

As a result, any initial research you do will be a building block for future studies. You will begin each subsequent research process with more information. You will continue to broaden your perspective each time.

Research helps you become more empathic

Even if you don’t change your mind on a subject, researching that topic will expose you to other points of view and help you understand why people might feel differently about a situation. 

The more knowledge you gain about how others think, the more likely you are to humanize them and be more empathetic to diverse viewpoints and backgrounds in the future.

Research teaches you how to learn

Through the research process, you discover where you have information gaps and what questions to ask in order to solve them. It helps you approach a subject with curiosity and a willingness to learn rather than thinking you have the right answer from the beginning.

Georgi Georgiev

Owner, GIGA calculator

It helps us learn about the status quo of existing literature

The starting point of every scientific and non-scientific paper is in-depth literature research.

It helps to:

  • gather evidence about a specific research topic
  • answer a specific scientific question
  • learn about the status quo of existing literature
  • identify potential problems and raise new questions

Anyone writing a scientific paper needs evidence based on facts to back up theories, hypotheses, assumptions, and claims. However, since most authors can’t derive all the evidence on their own, they have to rely on the evidence provided by existing scientific (and peer-reviewed) literature. 

Subsequently, comprehensive literature research is inevitable. Only by delving deeply into a research topic will the authors gather the data and evidence necessary for a differentiated examination of the current status quo. 

This, in turn, will allow them to develop new ideas and raise new questions. 

Craig Miller

Co-Founder,  Academia Labs LLC

Research supplements knowledge gaps

In academia, research is critical. Our daily lives revolve around research, making it an integral part of education.

If you want to know which restaurant in your area serves the best steak, you have to research it on the internet and read reviews. If you want to learn how to make an omelet, you have to look it up on the internet or ask your parents. Hence, research is part of our lives, whether we want it or not.

It is no secret that there are many gaps in our collective pool of knowledge. Research is the only thing that can fill these gaps and answer the questions that still have no answers.

It can also provide correct information on long-debated questions, such as the shape of the Earth and the evolution of humankind.

With information readily available to us with just a click and a scroll on the internet, research is crucial for identifying which data are factual and which are just fake news. More than that, it helps transfer correct information from one person to another while combating the spread of false information.

Frequently Asked Questions

What is the importance of research?

Research plays a critical role in advancing our knowledge and understanding of the world around us. Here are some key reasons why research is so important:

• Generates new knowledge : Research is a process of discovering new information and insights. It allows us to explore questions that have not yet been answered, and to generate new ideas and theories that can help us make sense of the world.

• Improves existing knowledge : Research also allows us to build on existing knowledge, by testing and refining theories, and by uncovering new evidence that supports or challenges our understanding of a particular topic.

• Drives innovation : Many of the greatest innovations in history have been driven by research. Whether it’s developing new technologies, discovering new medical treatments, or exploring new frontiers in science, research is essential for pushing the boundaries of what is possible.

• Informs decision-making : Research provides the evidence and data needed to make informed decisions. Whether it’s in business, government, or any other field, research helps us understand the pros and cons of different options, and to choose the course of action that is most likely to achieve our goals.

• Promotes critical thinking : Conducting research requires us to think critically, analyze data, and evaluate evidence. These skills are not only valuable in research, but also in many other areas of life, such as problem-solving, decision-making, and communication.

What is the ultimate goal of research?

The ultimate goal of research is to uncover new knowledge, insights, and understanding about a particular topic or phenomenon. Through careful investigation, analysis, and interpretation of data, researchers aim to make meaningful contributions to their field of study and advance our collective understanding of the world around us.

There are many different types of research, each with its own specific goals and objectives. Some research seeks to test hypotheses or theories, while others aim to explore and describe a particular phenomenon. Still, others may be focused on developing new technologies or methods for solving practical problems.

Regardless of the specific goals of a given research project, all research shares a common aim: to generate new knowledge and insights that can help us better understand and navigate the complex world we live in.

Of course, conducting research is not always easy or straightforward.

Researchers must contend with a wide variety of challenges, including finding funding, recruiting participants, collecting and analyzing data, and interpreting their results. But despite these obstacles, the pursuit of knowledge and understanding remains a fundamental driving force behind all scientific inquiry.

How can research improve the quality of life?

Research can improve the quality of life in a variety of ways, from advancing medical treatments to informing social policies that promote equality and justice. Here are some specific examples:

• Medical research : Research in medicine and healthcare can lead to the development of new treatments, therapies, and technologies that improve health outcomes and save lives.

For example, research on vaccines and antibiotics has helped to prevent and treat infectious diseases, while research on cancer has led to new treatments and improved survival rates.

• Environmental research : Research on environmental issues can help us to understand the impact of human activities on the planet and develop strategies to mitigate and adapt to climate change.

For example, research on renewable energy sources can help to reduce greenhouse gas emissions and protect the environment for future generations.

• Social research : Research on social issues can help us to understand and address social problems such as poverty, inequality, and discrimination.

For example, research on the effects of poverty on child development can inform policies and programs that support families and promote child well-being.

• Technological research : Research on technology can lead to the development of new products and services that improve quality of life, such as assistive technologies for people with disabilities or smart home systems that promote safety and convenience.


What Is Applied Research and Why Is It Important?

Our team of experts explain how applied research has a direct effect on businesses and organizations by answering specific questions using scientific methods.


What is STEM Education?

STEM education, now also known as STEAM, is a multidisciplinary approach to teaching.


STEM education is a teaching approach that combines science, technology, engineering and math . Its recent successor, STEAM, also incorporates the arts, which have the "ability to expand the limits of STEM education and application," according to Stem Education Guide . STEAM is designed to encourage discussions and problem-solving among students, developing both practical skills and appreciation for collaborations, according to the Institution for Art Integration and STEAM .

Rather than teach the five disciplines as separate and discrete subjects, STEAM integrates them into a cohesive learning paradigm based on real-world applications. 

According to the U.S. Department of Education, "In an ever-changing, increasingly complex world, it's more important than ever that our nation's youth are prepared to bring knowledge and skills to solve problems, make sense of information, and know how to gather and evaluate evidence to make decisions."

In 2009, the Obama administration announced the "Educate to Innovate" campaign to motivate and inspire students to excel in STEAM subjects. The campaign also addressed the inadequate number of teachers skilled in teaching these subjects.

The Department of Education now offers a number of STEM-based programs, including research programs with a STEAM emphasis, STEAM grant selection programs and general programs that support STEAM education.

In 2020, the U.S. Department of Education awarded $141 million in new grants and $437 million to continue existing STEAM projects; a breakdown of these grants can be seen in its investment report.

The importance of STEM and STEAM education

STEAM education is crucial to meet the needs of a changing world. According to an article from iD Tech, millions of STEAM jobs remain unfilled in the U.S., so efforts to fill this skills gap are of great importance. According to a report from the U.S. Bureau of Labor Statistics, STEAM-related occupations are projected to grow by 10.5% between 2020 and 2030, compared with 7.5% for non-STEAM occupations. The median wage in 2020 was also higher in STEAM occupations ($89,780) than in non-STEAM occupations ($40,020).

Employment in computer occupations alone is projected to increase by 12.5 percent between 2014 and 2024, according to a STEAM occupations report. With projected increases in STEAM-related occupations, there needs to be a matching increase in STEAM education efforts to encourage students into these fields; otherwise the skills gap will continue to grow.

STEAM jobs do not all require higher education or even a college degree. Less than half of entry-level STEAM jobs require a bachelor's degree or higher, according to skills gap website Burning Glass Technologies. However, a four-year degree pays off in salary: the average advertised starting salary for entry-level STEAM jobs requiring a bachelor's degree was 26 percent higher than for entry-level jobs in non-STEAM fields. For every non-STEAM job posting aimed at a bachelor's degree recipient, there were 2.5 entry-level postings aimed at bachelor's degree recipients in STEAM fields.

What separates STEAM from traditional science and math education is the blended learning environment and the emphasis on showing students how the scientific method can be applied to everyday life. It teaches students computational thinking and focuses on the real-world applications of problem-solving. As mentioned before, STEAM education begins while students are very young:

Elementary school — STEAM education focuses on introductory-level STEAM courses and on awareness of the STEAM fields and occupations. This initial step provides standards-based, structured, inquiry- and real-world problem-based learning that connects the STEAM subjects. The goal is to pique students' interest so that they want to pursue these courses, rather than taking them because they have to. There is also an emphasis on bridging in-school and out-of-school STEAM learning opportunities.

Middle school — At this stage, the courses become more rigorous and challenging. Student awareness of STEAM fields and occupations is still pursued, as well as the academic requirements of such fields. Student exploration of STEAM-related careers begins at this level, particularly for underrepresented populations. 

High school — The program of study focuses on the application of the subjects in a challenging and rigorous manner. Courses and pathways are now available in STEAM fields and occupations, as well as preparation for post-secondary education and employment. More emphasis is placed on bridging in-school and out-of-school STEAM opportunities.

Much of the STEAM curriculum is aimed at attracting underrepresented populations. There is a significant gender disparity among those employed in STEAM fields, according to Stem Women: approximately 1 in 4 STEAM graduates is female.

Inequalities in STEAM education

In the UK, people from Black backgrounds in STEAM education have poorer degree outcomes and lower rates of academic career progression than other ethnic groups, according to a report from The Royal Society. Although the proportion of Black students in STEAM higher education has increased over the last decade, they are leaving STEAM careers at a higher rate than other ethnic groups.

"These reports highlight the challenges faced by Black researchers, but we also need to tackle the wider inequalities which exist across our society and prevent talented people from pursuing careers in science." President of the Royal Society, Sir Adrian Smith said. 

Asian students typically have the highest level of interest in STEAM. According to the Royal Society report, in 2018/19, 18.7% of academic staff in STEAM were from ethnic minority groups; of these, 13.2% were Asian compared with 1.7% who were Black.

If you want to learn more about why STEAM is so important, check out this informative article from the University of San Diego. Explore some handy STEAM education teaching resources courtesy of the Resilient Educator. Looking for tips to help get children into STEAM? Forbes has got you covered.

  • Lee, Meggan J., et al. "'If you aren't White, Asian or Indian, you aren't an engineer': racial microaggressions in STEM education." International Journal of STEM Education 7.1 (2020): 1-16.
  • Fayer, Stella, Alan Lacey, and Audrey Watson. "STEM Occupations: Past, Present, and Future." Report, 2017.
  • Institution for Art Integration and STEAM. "What is STEAM education?"
  • Barone, Ryan. "The state of STEM education told through 18 stats." iD Tech.
  • U.S. Department of Education. "Science, Technology, Engineering, and Math, including Computer Science."
  • The Royal Society. "STEM sector must step up and end unacceptable disparities in Black staff." Report, March 25, 2021.
  • Stem Women. "Percentages of Women in STEM Statistics." stemwomen.com.

McKinsey Technology Trends Outlook 2023

After a tumultuous 2022 for technology investment and talent, the first half of 2023 has seen a resurgence of enthusiasm about technology’s potential to catalyze progress in business and society. Generative AI deserves much of the credit for ushering in this revival, but it stands as just one of many advances on the horizon that could drive sustainable, inclusive growth and solve complex global challenges.

To help executives track the latest developments, the McKinsey Technology Council  has once again identified and interpreted the most significant technology trends unfolding today. While many trends are in the early stages of adoption and scale, executives can use this research to plan ahead by developing an understanding of potential use cases and pinpointing the critical skills needed as they hire or upskill talent to bring these opportunities to fruition.

Our analysis examines quantitative measures of interest, innovation, and investment to gauge the momentum of each trend. Recognizing the long-term nature and interdependence of these trends, we also delve into underlying technologies, uncertainties, and questions surrounding each trend. This year, we added an important new dimension for analysis—talent. We provide data on talent supply-and-demand dynamics for the roles of most relevance to each trend. (For more, please see the sidebar, “Research methodology.”)

New and notable

All of last year’s 14 trends remain on our list, though some experienced accelerating momentum and investment, while others saw a downshift. One new trend, generative AI, made a loud entrance and has already shown potential for transformative business impact.

Research methodology

To assess the development of each technology trend, our team collected data on five tangible measures of activity: search engine queries, news publications, patents, research publications, and investment. For each measure, we used a defined set of data sources to find occurrences of keywords associated with each of the 15 trends, screened those occurrences for valid mentions of activity, and indexed the resulting numbers of mentions on a 0–1 scoring scale that is relative to the trends studied. The innovation score combines the patents and research scores; the interest score combines the news and search scores. (While we recognize that an interest score can be inflated by deliberate efforts to stimulate news and search activity, we believe that each score fairly reflects the extent of discussion and debate about a given trend.) Investment measures the flows of funding from the capital markets into companies linked with the trend. Data sources for the scores include the following:

  • Patents. Data on patent filings are sourced from Google Patents.
  • Research. Data on research publications are sourced from the Lens (www.lens.org).
  • News. Data on news publications are sourced from Factiva.
  • Searches. Data on search engine queries are sourced from Google Trends.
  • Investment. Data on private-market and public-market capital raises are sourced from PitchBook.
  • Talent demand. Number of job postings is sourced from McKinsey’s proprietary Organizational Data Platform, which stores licensed, de-identified data on professional profiles and job postings. Data is drawn primarily from English-speaking countries.
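
To make the indexing concrete, the following is a minimal sketch of how a scoring scheme like the one described above could work: raw mention counts for each measure are indexed onto a 0-1 scale relative to the trends studied (here with simple min-max scaling), and the innovation and interest scores combine the relevant indexed measures. The trend names, counts, and equal-weight averaging below are illustrative assumptions, not the report's actual data or weighting.

```python
# Illustrative sketch only: hypothetical counts and an assumed equal-weight
# combination; the actual data pipeline and weighting are not published here.
from typing import Dict

MEASURES = ["search", "news", "patents", "research", "investment"]

# Hypothetical raw counts of valid keyword mentions per trend and measure.
raw_counts: Dict[str, Dict[str, float]] = {
    "generative_ai":        {"search": 9000, "news": 7000, "patents": 800,  "research": 1200, "investment": 4.5e9},
    "applied_ai":           {"search": 6000, "news": 5000, "patents": 2500, "research": 3000, "investment": 9.0e9},
    "quantum_technologies": {"search": 1500, "news": 1200, "patents": 600,  "research": 900,  "investment": 1.2e9},
}

def index_measure(counts_by_trend: Dict[str, Dict[str, float]], measure: str) -> Dict[str, float]:
    """Index one measure onto a 0-1 scale relative to the trends studied (min-max)."""
    values = [counts_by_trend[t][measure] for t in counts_by_trend]
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero if all counts are equal
    return {t: (counts_by_trend[t][measure] - lo) / span for t in counts_by_trend}

indexed = {m: index_measure(raw_counts, m) for m in MEASURES}

scores = {}
for trend in raw_counts:
    innovation = (indexed["patents"][trend] + indexed["research"][trend]) / 2  # assumed equal weighting
    interest = (indexed["news"][trend] + indexed["search"][trend]) / 2         # assumed equal weighting
    scores[trend] = {
        "innovation": round(innovation, 2),
        "interest": round(interest, 2),
        "investment_index": round(indexed["investment"][trend], 2),
    }

for trend, s in scores.items():
    print(trend, s)
```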

In addition, we updated the selection and definition of trends from last year’s study to reflect the evolution of technology trends:

  • The generative-AI trend was added since last year’s study.
  • We adjusted the definitions of electrification and renewables (previously called future of clean energy) and climate technologies beyond electrification and renewables (previously called future of sustainable consumption).
  • Data sources were updated. This year, we included only closed deals in PitchBook data, which revised downward the investment numbers for 2018–22. For future of space technologies investments, we used research from McKinsey’s Aerospace & Defense Practice.

This new entrant represents the next frontier of AI. Building upon existing technologies such as applied AI and industrializing machine learning, generative AI has high potential and applicability across most industries. Interest in the topic (as gauged by news and internet searches) increased threefold from 2021 to 2022. As we recently wrote, generative AI and other foundational models  change the AI game by taking assistive technology to a new level, reducing application development time, and bringing powerful capabilities to nontechnical users. Generative AI is poised to add as much as $4.4 trillion in economic value from a combination of specific use cases and more diffuse uses—such as assisting with email drafts—that increase productivity. Still, while generative AI can unlock significant value, firms should not underestimate the economic significance and the growth potential that underlying AI technologies and industrializing machine learning can bring to various industries.

Investment in most tech trends tightened year over year, but the potential for future growth remains high, as further indicated by the recent rebound in tech valuations. Indeed, absolute investments remained strong in 2022, at more than $1 trillion combined, indicating great faith in the value potential of these trends. Trust architectures and digital identity grew the most out of last year’s 14 trends, increasing by nearly 50 percent as security, privacy, and resilience become increasingly critical across industries. Investment in other trends—such as applied AI, advanced connectivity, and cloud and edge computing—declined, but that is likely due, at least in part, to their maturity. More mature technologies can be more sensitive to short-term budget dynamics than more nascent technologies with longer investment time horizons, such as climate and mobility technologies. Also, as some technologies become more profitable, they can often scale further with lower marginal investment. Given that these technologies have applications in most industries, we have little doubt that mainstream adoption will continue to grow.

Organizations shouldn’t focus too heavily on the trends that are garnering the most attention. By focusing on only the most hyped trends, they may miss out on the significant value potential of other technologies and hinder the chance for purposeful capability building. Instead, companies seeking longer-term growth should focus on a portfolio-oriented investment across the tech trends most important to their business. Technologies such as cloud and edge computing and the future of bioengineering have shown steady increases in innovation and continue to have expanded use cases across industries. In fact, more than 400 edge use cases across various industries have been identified, and edge computing is projected to win double-digit growth globally over the next five years. Additionally, nascent technologies, such as quantum, continue to evolve and show significant potential for value creation. Our updated analysis for 2023 shows that the four industries likely to see the earliest economic impact from quantum computing—automotive, chemicals, financial services, and life sciences—stand to potentially gain up to $1.3 trillion in value by 2035. By carefully assessing the evolving landscape and considering a balanced approach, businesses can capitalize on both established and emerging technologies to propel innovation and achieve sustainable growth.

Tech talent dynamics

We can’t overstate the importance of talent as a key source in developing a competitive edge. A lack of talent is a top issue constraining growth. There’s a wide gap between the demand for people with the skills needed to capture value from the tech trends and available talent: our survey of 3.5 million job postings in these tech trends found that many of the skills in greatest demand have less than half as many qualified practitioners per posting as the global average. Companies should be on top of the talent market, ready to respond to notable shifts and to deliver a strong value proposition to the technologists they hope to hire and retain. For instance, recent layoffs in the tech sector may present a silver lining for other industries that have struggled to win the attention of attractive candidates and retain senior tech talent. In addition, some of these technologies will accelerate the pace of workforce transformation. In the coming decade, 20 to 30 percent of the time that workers spend on the job could be transformed by automation technologies, leading to significant shifts in the skills required to be successful. And companies should continue to look at how they can adjust roles or upskill individuals to meet their tailored job requirements. Job postings in fields related to tech trends grew at a very healthy 15 percent between 2021 and 2022, even though global job postings overall decreased by 13 percent. Applied AI and next-generation software development together posted nearly one million jobs between 2018 and 2022. Next-generation software development saw the most significant growth in number of jobs (exhibit).
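
To illustrate the supply-demand comparison described above, the sketch below computes qualified practitioners per job posting for a handful of skills and flags those with less than half the global average. All skill names and figures are hypothetical placeholders rather than survey data.

```python
# Illustrative talent-gap check: practitioners per posting vs. a global average.
# Skill names and all figures below are hypothetical placeholders.
skills = {
    # skill: (qualified_practitioners, job_postings)
    "machine_learning_engineering": (120_000, 90_000),
    "cloud_architecture":           (200_000, 60_000),
    "quantum_algorithms":           (4_000,   5_000),
}

global_practitioners = 50_000_000   # assumed totals across all skills tracked
global_postings = 10_000_000
global_ratio = global_practitioners / global_postings  # practitioners per posting

for skill, (practitioners, postings) in skills.items():
    ratio = practitioners / postings
    constrained = ratio < 0.5 * global_ratio  # "less than half the global average"
    print(f"{skill}: {ratio:.2f} practitioners per posting "
          f"({'talent-constrained' if constrained else 'near or above average'})")
```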

Job postings for fields related to tech trends grew by 400,000 between 2021 and 2022, with generative AI growing the fastest.

Image description:

Small multiples of 15 slope charts show the number of job postings in different fields related to tech trends from 2021 to 2022. Overall growth of all fields combined was about 400,000 jobs, with applied AI having the most job postings in 2022 and experiencing a 6% increase from 2021. Next-generation software development had the second-highest number of job postings in 2022 and had 29% growth from 2021. Other categories shown, from most job postings to least in 2022, are as follows: cloud and edge computing, trust architecture and digital identity, future of mobility, electrification and renewables, climate tech beyond electrification and renewables, advanced connectivity, immersive-reality technologies, industrializing machine learning, Web3, future of bioengineering, future of space technologies, generative AI, and quantum technologies.

End of image description.
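
For readers who want to reproduce an exhibit of this kind, here is a minimal matplotlib sketch of a small-multiples slope chart: one panel per trend, a line from 2021 to 2022 job postings, and the percentage change annotated. The trends and counts shown are hypothetical placeholders, not the exhibit's underlying data.

```python
# Small-multiples slope chart sketch; data below are hypothetical placeholders.
import matplotlib.pyplot as plt

postings = {
    "Applied AI":            (780_000, 830_000),
    "Next-gen software dev": (550_000, 710_000),
    "Generative AI":         (6_000,   16_000),
}

fig, axes = plt.subplots(1, len(postings), figsize=(9, 3))
for ax, (trend, (y2021, y2022)) in zip(axes, postings.items()):
    ax.plot([2021, 2022], [y2021, y2022], marker="o")  # one slope per panel
    ax.set_title(trend, fontsize=9)
    ax.set_xticks([2021, 2022])
    growth = (y2022 - y2021) / y2021 * 100
    ax.annotate(f"{growth:+.0f}%", xy=(2022, y2022), fontsize=8)

fig.suptitle("Job postings by tech trend, 2021 vs. 2022 (illustrative data)")
fig.tight_layout()
plt.show()
```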

This bright outlook for practitioners in most fields highlights the challenge facing employers who are struggling to find enough talent to keep up with their demands. The shortage of qualified talent has been a persistent limiting factor in the growth of many high-tech fields, including AI, quantum technologies, space technologies, and electrification and renewables. The talent crunch is particularly pronounced for trends such as cloud computing and industrializing machine learning, which are required across most industries. It’s also a major challenge in areas that employ highly specialized professionals, such as the future of mobility and quantum computing (see interactive).

Michael Chui is a McKinsey Global Institute partner in McKinsey’s Bay Area office, where Mena Issler is an associate partner, Roger Roberts  is a partner, and Lareina Yee  is a senior partner.

The authors wish to thank the following McKinsey colleagues for their contributions to this research: Bharat Bahl, Soumya Banerjee, Arjita Bhan, Tanmay Bhatnagar, Jim Boehm, Andreas Breiter, Tom Brennan, Ryan Brukardt, Kevin Buehler, Zina Cole, Santiago Comella-Dorda, Brian Constantine, Daniela Cuneo, Wendy Cyffka, Chris Daehnick, Ian De Bode, Andrea Del Miglio, Jonathan DePrizio, Ivan Dyakonov, Torgyn Erland, Robin Giesbrecht, Carlo Giovine, Liz Grennan, Ferry Grijpink, Harsh Gupta, Martin Harrysson, David Harvey, Kersten Heineke, Matt Higginson, Alharith Hussin, Tore Johnston, Philipp Kampshoff, Hamza Khan, Nayur Khan, Naomi Kim, Jesse Klempner, Kelly Kochanski, Matej Macak, Stephanie Madner, Aishwarya Mohapatra, Timo Möller, Matt Mrozek, Evan Nazareth, Peter Noteboom, Anna Orthofer, Katherine Ottenbreit, Eric Parsonnet, Mark Patel, Bruce Philp, Fabian Queder, Robin Riedel, Tanya Rodchenko, Lucy Shenton, Henning Soller, Naveen Srikakulam, Shivam Srivastava, Bhargs Srivathsan, Erika Stanzl, Brooke Stokes, Malin Strandell-Jansson, Daniel Wallance, Allen Weinberg, Olivia White, Martin Wrulich, Perez Yeptho, Matija Zesko, Felix Ziegler, and Delphine Zurkiya.

They also wish to thank the external members of the McKinsey Technology Council.

This interactive was designed, developed, and edited by McKinsey Global Publishing’s Nayomi Chibana, Victor Cuevas, Richard Johnson, Stephanie Jones, Stephen Landau, LaShon Malone, Kanika Punwani, Katie Shearer, Rick Tetzeli, Sneha Vats, and Jessica Wang.

Research Partnerships Manager

Historic England

  • Closing: 11:59pm, 7th Jul 2024 BST

Job Description

We are the public body that looks after England’s historic environment. We champion historic places, helping people understand, value and care for them.

Historic England have a fantastic opportunity for you to join us as our Research Partnerships Manager.

This is a national role with hybrid working: you will be based in one of the following offices and from home - Newcastle, York, Manchester, Birmingham, Swindon, Bristol, Portsmouth or Cambridge.

What you will be doing

The primary purpose of this post is to facilitate the development of successful partnerships with Higher Education Institutions (HEIs), Independent Research Organisations (IROs) and research funding bodies, and to identify and support the submission of high-quality proposals to research funding streams to help grow research opportunities and income.

Working closely with the Head of Research Development, you will engage with research staff across Historic England and externally to match our research expertise and priorities to funding opportunities in the context of a research engagement strategy. You will oversee our Arts and Humanities Research Council–funded Collaborative Doctoral Partnership (CDP) programme, review and monitor our developing external research partnerships, and make best use of the opportunity provided by external research funding.

You will work closely with our Research Manager. You will take a lead on pre-award activities, but may be required to support other areas of the research funding process as required.

Who we are looking for

  • A high quality qualification in a relevant subject or equivalent relevant experience
  • A good understanding of the Higher Education sector and the research funding landscape
  • Demonstrable commitment to student development
  • Experience in developing successful partnerships between organisations in a research context
  • Experience in programme and/or project management and budget development
  • Good negotiating and influencing skills
  • Aptitude for strategic thinking

We are an equal opportunity employer which values diversity and inclusion. If you have a disability or neurodiversity, we would be happy to discuss reasonable adjustments to the job with you. Having just won the Gold Award from MIND, we also recognise the importance of a healthy work-life balance.

We are an inclusive employer and believe that flexible working options are for everyone. We want to make sure our working arrangements don’t prevent anyone from joining us because of their personal circumstances. We also want to provide you with the best balance in your home and work life that we can.

We are open to considering options including job sharing, part-time working, compressed hours working and different working locations, including hybrid working. Please visit our jobs pages or contact us to find out more.

Why work for Historic England

We offer a wide benefits package including a competitive pension scheme starting at 28% contributions, a generous 28 days holiday, corporate discounts, free entry into English Heritage sites across the country and development opportunities to ensure you achieve your goals.

We are committed to promoting equality of opportunity for everyone. Diversity helps us to perform better and attract more people to support our work. We welcome and encourage job applications from people of all backgrounds.

We particularly encourage applications from Black, Asian and Minority Ethnic candidates and candidates with disabilities as they are underrepresented within Historic England at this level.

Historic England want all of our candidates to shine in the recruitment process. Please tell us what we can do to make sure you can show us your very best self. You can contact us by email at [email protected] if you have any recruitment queries. 

To ensure a fair and inclusive recruitment process for everyone the use of AI or automated tools is not permitted.

Interview dates: Provisional 25th July 2024 - Virtual

Please follow the link for a full copy of the Job Description –

https://historicengland.org.uk/media/0lxjsewc/research-partnerships-manager-updated.odt

Melur K. Ramasubramanian

Research Topics

Microfluidic systems using advanced biomaterials and biological processes

Contact Information

353 Broadway

Albany, NY 12246

[email protected]

  • 1987 PhD in Mechanical Engineering, Syracuse University
  • 1983 MS in Applied Sciences, Miami University
  • 1981 BS in Mechanical Engineering, National Institute of Technology, Durgapur, India

Awards and Honors

  • American Institute for Medical and Biological Engineering, Fellow (2013)
  • Technical Association of Pulp and Paper Industry, Fellow (2011)
  • NSF Director’s Award (2011)
  • American Society of Mechanical Engineers, Fellow (2010)
