Guide to Experimental Design | Overview, 5 Steps & Examples

Published on December 3, 2019 by Rebecca Bevans. Revised on June 21, 2023.

Experiments are used to study causal relationships. You manipulate one or more independent variables and measure their effect on one or more dependent variables.

Experimental design creates a set of procedures to systematically test a hypothesis. A good experimental design requires a strong understanding of the system you are studying.

There are five key steps in designing an experiment:

  • Consider your variables and how they are related
  • Write a specific, testable hypothesis
  • Design experimental treatments to manipulate your independent variable
  • Assign subjects to groups, either between-subjects or within-subjects
  • Plan how you will measure your dependent variable

For valid conclusions, you also need to select a representative sample and control any extraneous variables that might influence your results. Doing so minimizes several types of research bias, particularly sampling bias, survivorship bias, and attrition bias. If random assignment of participants to control and treatment groups is impossible, unethical, or highly difficult, consider an observational study instead.

Table of contents

  • Step 1: Define your variables
  • Step 2: Write your hypothesis
  • Step 3: Design your experimental treatments
  • Step 4: Assign your subjects to treatment groups
  • Step 5: Measure your dependent variable
  • Other interesting articles
  • Frequently asked questions about experiments

You should begin with a specific research question. We will work with two research question examples, one from health sciences and one from ecology: Does phone use before sleep affect the amount of sleep a person gets? Does air temperature affect the amount of CO2 respired from soil?

To translate your research question into an experimental hypothesis, you need to define the main variables and make predictions about how they are related.

Start by simply listing the independent and dependent variables.

Research question Independent variable Dependent variable
Phone use and sleep Minutes of phone use before sleep Hours of sleep per night
Temperature and soil respiration Air temperature just above the soil surface CO2 respired from soil

Then you need to think about possible extraneous and confounding variables and consider how you might control them in your experiment.

Research question Extraneous variable How to control
Phone use and sleep Natural variation in sleep patterns among individuals. Measure the average difference between sleep with phone use and sleep without phone use, rather than the average amount of sleep per treatment group.
Temperature and soil respiration Soil moisture also affects respiration, and moisture can decrease with increasing temperature. Monitor soil moisture and add water to make sure that soil moisture is consistent across all treatment plots.

Finally, you can put these variables together into a diagram. Use arrows to show the possible relationships between variables and include signs to show the expected direction of the relationships.

Diagram of the relationship between variables in a sleep experiment

For the soil experiment, we predict that increasing temperature will increase soil respiration and decrease soil moisture, while decreasing soil moisture will lead to decreased soil respiration.


Now that you have a strong conceptual understanding of the system you are studying, you should be able to write a specific, testable hypothesis that addresses your research question.

Research question Null hypothesis (H₀) Alternate hypothesis (Hₐ)
Phone use and sleep Phone use before sleep does not correlate with the amount of sleep a person gets. Increasing phone use before sleep leads to a decrease in sleep.
Temperature and soil respiration Air temperature does not correlate with soil respiration. Increased air temperature leads to increased soil respiration.

The next steps will describe how to design a controlled experiment. In a controlled experiment, you must be able to:

  • Systematically and precisely manipulate the independent variable(s).
  • Precisely measure the dependent variable(s).
  • Control any potential confounding variables.

If your study system doesn’t match these criteria, there are other types of research you can use to answer your research question.

How you manipulate the independent variable can affect the experiment’s external validity – that is, the extent to which the results can be generalized and applied to the broader world.

First, you may need to decide how widely to vary your independent variable. For example, in the soil respiration experiment, you could warm the air:

  • just slightly above the natural range for your study region.
  • over a wider range of temperatures to mimic future warming.
  • over an extreme range that is beyond any possible natural variation.

Second, you may need to choose how finely to vary your independent variable. Sometimes this choice is made for you by your experimental system, but often you will need to decide, and this will affect how much you can infer from your results. For example, in the sleep experiment, you could treat phone use as:

  • a categorical variable: either as binary (yes/no) or as levels of a factor (no phone use, low phone use, high phone use).
  • a continuous variable (minutes of phone use measured every night).

How you apply your experimental treatments to your test subjects is crucial for obtaining valid and reliable results.

First, you need to consider the study size: how many individuals will be included in the experiment? In general, the more subjects you include, the greater your experiment’s statistical power, which determines how much confidence you can have in your results.
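To make the link between study size and power concrete, here is a minimal power-analysis sketch in Python. It assumes a two-group comparison analyzed with an independent-samples t-test; the effect size, alpha level, and power target are illustrative assumptions, not values from this article.

```python
# Minimal a priori power-analysis sketch (illustrative assumptions, not
# values from the article): subjects needed per group for a two-sample t-test.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,         # assumed standardized mean difference (Cohen's d)
    alpha=0.05,              # significance level
    power=0.8,               # desired probability of detecting the effect
    alternative="two-sided",
)
print(f"subjects needed per group: {n_per_group:.0f}")  # roughly 64
```

Raising the power target or shrinking the effect size you want to detect drives the required sample size up quickly, which is why this calculation is worth doing before recruiting subjects.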

Then you need to randomly assign your subjects to treatment groups. Each group receives a different level of the treatment (e.g., no phone use, low phone use, high phone use).

You should also include a control group, which receives no treatment. The control group tells us what would have happened to your test subjects without any experimental intervention.

When assigning your subjects to groups, there are two main choices you need to make:

  • A completely randomized design vs. a randomized block design.
  • A between-subjects design vs. a within-subjects design.

Randomization

An experiment can be completely randomized or randomized within blocks (aka strata):

  • In a completely randomized design, every subject is assigned to a treatment group at random.
  • In a randomized block design (aka stratified random design), subjects are first grouped according to a characteristic they share, and then randomly assigned to treatments within those groups.

Research question Completely randomized design Randomized block design
Phone use and sleep Subjects are all randomly assigned a level of phone use using a random number generator. Subjects are first grouped by age, and then phone use treatments are randomly assigned within these groups.
Temperature and soil respiration Warming treatments are assigned to soil plots at random by using a number generator to generate map coordinates within the study area. Soils are first grouped by average rainfall, and then treatment plots are randomly assigned within these groups.
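Both schemes in the table can be scripted directly. The sketch below, in Python, implements them for the phone-use example; the subject IDs, age bands, and group sizes are invented for illustration.

```python
# Sketch of both randomization schemes for the phone-use example.
# Subject IDs and age-band blocks are invented for illustration.
import random

treatments = ["no phone use", "low phone use", "high phone use"]
subjects = [f"S{i:02d}" for i in range(1, 13)]

# Completely randomized design: shuffle all subjects, then deal treatments out.
shuffled = subjects[:]
random.shuffle(shuffled)
completely_randomized = {s: treatments[i % 3] for i, s in enumerate(shuffled)}

# Randomized block design: group subjects by a shared characteristic
# (here, age band), then randomize treatments within each block.
blocks = {"18-29": subjects[0:4], "30-49": subjects[4:8], "50+": subjects[8:12]}
randomized_block = {}
for band, members in blocks.items():
    order = members[:]
    random.shuffle(order)
    for i, s in enumerate(order):
        randomized_block[s] = treatments[i % 3]
```

Blocking guarantees that every age band contains each treatment level, so age differences cannot be confounded with treatment differences.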

Sometimes randomization isn’t practical or ethical, so researchers create partially random or even non-random designs. An experimental design where treatments aren’t randomly assigned is called a quasi-experimental design.

Between-subjects vs. within-subjects

In a between-subjects design (also known as an independent measures design or classic ANOVA design), individuals receive only one of the possible levels of an experimental treatment.

In medical or social research, you might also use matched pairs within your between-subjects design to make sure that each treatment group contains the same variety of test subjects in the same proportions.

In a within-subjects design (also known as a repeated measures design), every individual receives each of the experimental treatments consecutively, and their responses to each treatment are measured.

Within-subjects or repeated measures can also refer to an experimental design where an effect emerges over time, and individual responses are measured over time in order to measure this effect as it emerges.

Counterbalancing (randomizing or reversing the order of treatments among subjects) is often used in within-subjects designs to ensure that the order of treatment application doesn’t influence the results of the experiment.

Research question Between-subjects (independent measures) design Within-subjects (repeated measures) design
Phone use and sleep Subjects are randomly assigned a level of phone use (none, low, or high) and follow that level of phone use throughout the experiment. Subjects are assigned consecutively to zero, low, and high levels of phone use throughout the experiment, and the order in which they follow these treatments is randomized.
Temperature and soil respiration Warming treatments are assigned to soil plots at random and the soils are kept at this temperature throughout the experiment. Every plot receives each warming treatment (1, 3, 5, 8, and 10°C above ambient temperatures) consecutively over the course of the experiment, and the order in which they receive these treatments is randomized.
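Counterbalancing can also be scripted. This Python sketch cycles subjects through the six possible orders of the three phone-use levels, so every order is equally represented; subject IDs are invented for illustration.

```python
# Sketch of counterbalancing in a within-subjects design: each subject
# receives all three treatments, in one of the 3! = 6 possible orders.
# Subject IDs are invented for illustration.
from itertools import permutations
import random

treatments = ["no phone use", "low phone use", "high phone use"]
orders = list(permutations(treatments))  # all 6 possible treatment orders

subjects = [f"S{i:02d}" for i in range(1, 13)]
random.shuffle(subjects)  # randomize which subject gets which order
counterbalanced = {s: orders[i % len(orders)] for i, s in enumerate(subjects)}
```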

Finally, you need to decide how you’ll collect data on your dependent variable outcomes. You should aim for reliable and valid measurements that minimize research bias or error.

Some variables, like temperature, can be objectively measured with scientific instruments. Others may need to be operationalized to turn them into measurable observations. For example, to measure hours of sleep, you could:

  • Ask participants to record what time they go to sleep and get up each day.
  • Ask participants to wear a sleep tracker.

How precisely you measure your dependent variable also affects the kinds of statistical analysis you can use on your data.
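As a hedged illustration of that point, the following Python sketch simulates sleep data (all numbers invented) and shows how a continuously measured outcome supports a comparison of means, while the same outcome reduced to a yes/no category only supports a contingency-table test.

```python
# Illustration with invented, simulated data: measurement precision
# determines which statistical analyses are available.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
no_phone = rng.normal(7.5, 1.0, 30)    # hours of sleep, no-phone group
high_phone = rng.normal(6.8, 1.0, 30)  # hours of sleep, high-phone group

# Continuous measurement: compare group means with a t-test.
t_stat, p_continuous = stats.ttest_ind(no_phone, high_phone)

# Coarser binary measurement ("slept at least 7 hours?"): only a
# categorical test, such as chi-square, remains available.
table = [[(g >= 7).sum(), (g < 7).sum()] for g in (no_phone, high_phone)]
chi2, p_binary, dof, expected = stats.chi2_contingency(table)
```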

Experiments are always context-dependent, and a good experimental design will take into account all of the unique considerations of your study system to produce information that is both valid and relevant to your research question.

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Student’s t-distribution
  • Normal distribution
  • Null and Alternative Hypotheses
  • Chi square tests
  • Confidence interval
  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs Replicability
  • Peer review
  • Likert scale

Research bias

  • Implicit bias
  • Framing effect
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic

Experimental design means planning a set of procedures to investigate a relationship between variables. To design a controlled experiment, you need:

  • A testable hypothesis
  • At least one independent variable that can be precisely manipulated
  • At least one dependent variable that can be precisely measured

When designing the experiment, you decide:

  • How you will manipulate the variable(s)
  • How you will control for any potential confounding variables
  • How many subjects or samples will be included in the study
  • How subjects will be assigned to treatment levels

Experimental design is essential to the internal and external validity of your experiment.

The key difference between observational studies and experimental designs is that a well-done observational study does not influence the responses of participants, while experiments apply some sort of treatment condition to at least some participants by random assignment.

A confounding variable, also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design, it’s important to identify potential confounding variables and plan how you will reduce their impact.

In a between-subjects design, every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design, each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word “between” means that you’re comparing different conditions between groups, while the word “within” means you’re comparing different conditions within the same group.

An experimental group, also known as a treatment group, receives the treatment whose effect researchers wish to study, whereas a control group does not. They should be identical in all other ways.


Scientific modelling

In science, a model is a representation of an idea, an object or even a process or a system that is used to describe and explain phenomena that cannot be experienced directly. Models are central to what scientists do, both in their research as well as when communicating their explanations.

Models are a mentally visual way of linking theory with experiment, and they guide research by being simplified representations of an imagined reality that enable predictions to be developed and tested by experiment.

Why scientists use models

Models have a variety of uses – from providing a way of explaining complex data to presenting as a hypothesis. There may be more than one model proposed by scientists to explain or predict what might happen in particular circumstances. Often scientists will argue about the ‘rightness’ of their model, and in the process, the model will evolve or be rejected. Consequently, models are central to the process of knowledge-building in science and demonstrate how science knowledge is tentative.

Think about a model showing the Earth – a globe. Until 2005, globes were always an artist’s representation of what we thought the planet looked like. (In 2005, the first globe using satellite pictures from NASA was produced.) The first known globe to be made (in 150 BC) was not very accurate. The globe was constructed in Greece so perhaps only showed a small amount of land in Europe, and it wouldn’t have had Australia, China or New Zealand on it! As the amount of knowledge has built up over hundreds of years, the model has improved until, by the time a globe made from real images was produced, there was no noticeable difference between the representation and the real thing.

Building a model

Scientists start with a small amount of data and build up a better and better representation of the phenomena they are explaining or using for prediction as time goes on. These days, many models are likely to be mathematical and are run on computers, rather than being a visual representation, but the principle is the same.

Using models for predicting

In some situations, models are developed by scientists to try and predict things. The best examples are climate models and climate change. Humans don’t know the full effect they are having on the planet, but we do know a lot about carbon cycles, water cycles and weather. Using this information and an understanding of how these cycles interact, scientists are trying to figure out what might happen. Models further rely on the work of scientists to collect quality data to feed into the models. To learn more about work to collate data for models, look at the Argo Project and the work being done to collect large-scale temperature and salinity data to understand what role the ocean plays in climate and climate change.

For example, they can use data to predict what the climate might be like in 20 years if we keep producing carbon dioxide at current rates – what might happen if we produce more carbon dioxide and what would happen if we produce less. The results are used to inform politicians about what could happen to the climate and what can be changed.

Another common use of models is in management of fisheries. Fishing and selling fish to export markets is an important industry for many countries including New Zealand (worth $1.4 billion in 2009). However, overfishing is a real risk and can cause fishing grounds to collapse. Scientists use information about fish life cycles, breeding patterns, weather, coastal currents and habitats to predict how many fish can be taken from a particular area before the population is reduced below the point where it can’t recover.

Models can also be used when field experiments are too expensive or dangerous, such as models used to predict how fire spreads in road tunnels and how a fire might develop in a building.

How do we know if a model works?

Models are often used to make very important decisions, for example, reducing the amount of fish that can be taken from an area might send a company out of business or prevent a fisher from having a career that has been in their family for generations.

The costs associated with combating climate change are almost unimaginable, so it’s important that the models are right, but often it is a case of using the best information available to date. Models need to be continually tested to see if the data used provides useful information. A question scientists can ask of a model is: Does it fit the data that we know?

For climate change, this is a bit difficult. It might fit what we know now, but do we know enough? One way to test a climate change model is to run it backwards. Can it accurately predict what has already happened? Scientists can measure what has happened in the past, so if the model fits the data, it is thought to be a little more trustworthy. If it doesn’t fit, it’s time to do some more work.

This process of comparing model predictions with observable data is known as ‘ground-truthing’. For fisheries management, ground-truthing involves going out and taking samples of fish at different areas. If there are not as many fish in the region as the model predicts, it is time to do some more work.

Learn more about ground-truthing in Satellites measure sea ice thickness. Here scientists are validating satellite data on ice thickness in Antarctica so the data can be used to model how the Earth’s climate, sea temperature and sea levels may be changing.

Nature of science

Models have always been important in science and continue to be used to test hypotheses and predict information. Often they are not accurate because the scientists may not have all the data. It is important that scientists test their models and be willing to improve them as new data comes to light. Model-building can take time – an accurate globe took more than 2,000 years to create – hopefully, an accurate model for climate change will take significantly less time.



Rutherford atomic model

  • What is the model of the atom proposed by Ernest Rutherford?
  • What is the Rutherford gold-foil experiment?
  • What were the results of Rutherford’s experiment?
  • What did Ernest Rutherford’s atomic model get right and wrong?
  • What was the impact of Ernest Rutherford’s theory?


The atom, as described by Ernest Rutherford, has a tiny, massive core called the nucleus. The nucleus has a positive charge. Electrons are particles with a negative charge. Electrons orbit the nucleus. The empty space between the nucleus and the electrons takes up most of the volume of the atom.

A piece of gold foil was hit with alpha particles, which have a positive charge. Most alpha particles went right through. This showed that the gold atoms were mostly empty space. Some particles had their paths bent at large angles. A few even bounced backward. The only way this would happen was if the atom had a small, heavy region of positive charge inside it.

The previous model of the atom, the Thomson atomic model, or the “plum pudding” model, in which negatively charged electrons were like the plums in the atom’s positively charged pudding, was disproved. The Rutherford atomic model relied on classical physics. The Bohr atomic model, relying on quantum mechanics, built upon the Rutherford model to explain the orbits of electrons.

The Rutherford atomic model was correct in that the atom is mostly empty space. Most of the mass is in the nucleus, and the nucleus is positively charged. Far from the nucleus are the negatively charged electrons. But the Rutherford atomic model used classical physics and not quantum mechanics. This meant that an electron circling the nucleus would give off electromagnetic radiation. The electron would lose energy and fall into the nucleus. In the Bohr model, which used quantum theory, the electrons exist only in specific orbits and can move between these orbits.

The gold-foil experiment showed that the atom consists of a small, massive, positively charged nucleus with the negatively charged electrons being at a great distance from the centre. Niels Bohr built upon Rutherford’s model to make his own. In Bohr’s model the orbits of the electrons were explained by quantum mechanics.

Rutherford model, description of the structure of atoms proposed (1911) by the New Zealand-born physicist Ernest Rutherford. The model described the atom as a tiny, dense, positively charged core called a nucleus, in which nearly all the mass is concentrated, around which the light, negative constituents, called electrons, circulate at some distance, much like planets revolving around the Sun.

The nucleus was postulated as small and dense to account for the scattering of alpha particles from thin gold foil, as observed in a series of experiments performed by undergraduate Ernest Marsden under the direction of Rutherford and German physicist Hans Geiger in 1909. A radioactive source emitting alpha particles (i.e., positively charged particles, identical to the helium atom nucleus and 7,000 times more massive than electrons) was enclosed within a protective lead shield. The radiation was focused into a narrow beam after passing through a slit in a lead screen. A thin section of gold foil was placed in front of the slit, and a screen coated with zinc sulfide to render it fluorescent served as a counter to detect alpha particles. As each alpha particle struck the fluorescent screen, it produced a burst of light called a scintillation, which was visible through a viewing microscope attached to the back of the screen. The screen itself was movable, allowing Rutherford and his associates to determine whether or not any alpha particles were being deflected by the gold foil.

Most alpha particles passed straight through the gold foil, which implied that atoms are mostly composed of open space. Some alpha particles were deflected slightly, suggesting interactions with other positively charged particles within the atom. Still other alpha particles were scattered at large angles, while a very few even bounced back toward the source. (Rutherford famously said later, “It was almost as incredible as if you fired a 15-inch shell at a piece of tissue paper and it came back and hit you.”) Only a positively charged and relatively heavy target particle, such as the proposed nucleus, could account for such strong repulsion. The negative electrons that balanced electrically the positive nuclear charge were regarded as traveling in circular orbits about the nucleus. The electrostatic force of attraction between electrons and nucleus was likened to the gravitational force of attraction between the revolving planets and the Sun. Most of this planetary atom was open space and offered no resistance to the passage of the alpha particles.

The Rutherford model supplanted the “plum-pudding” atomic model of English physicist Sir J.J. Thomson, in which the electrons were embedded in a positively charged atom like plums in a pudding. Based wholly on classical physics, the Rutherford model itself was superseded in a few years by the Bohr atomic model, which incorporated some early quantum theory. See also atomic model.

1.2 The Scientific Methods

Section Learning Objectives

By the end of this section, you will be able to do the following:

  • Explain how the methods of science are used to make scientific discoveries
  • Define a scientific model and describe examples of physical and mathematical models used in physics
  • Compare and contrast hypothesis, theory, and law

Teacher Support

The learning objectives in this section will help your students master the following standards:

  • (A) know the definition of science and understand that it has limitations, as specified in subsection (b)(2) of this section;
  • (B) know that scientific hypotheses are tentative and testable statements that must be capable of being supported or not supported by observational evidence. Hypotheses of durable explanatory power which have been tested over a wide variety of conditions are incorporated into theories;
  • (C) know that scientific theories are based on natural and physical phenomena and are capable of being tested by multiple independent researchers. Unlike hypotheses, scientific theories are well-established and highly-reliable explanations, but may be subject to change as new areas of science and new technologies are developed;
  • (D) distinguish between scientific hypotheses and scientific theories.

Section Key Terms

experiment, hypothesis, model, observation, principle, scientific law, scientific methods, theory, universal

[OL] Pre-assessment for this section could involve students sharing or writing down an anecdote about when they used the methods of science. Then, students could label their thought processes in their anecdote with the appropriate scientific methods. The class could also discuss their definitions of theory and law, both outside and within the context of science.

[OL] It should be noted and possibly mentioned that a scientist, as mentioned in this section, does not necessarily mean a trained scientist. It could be anyone using methods of science.

Scientific Methods

Scientists often plan and carry out investigations to answer questions about the universe around us. These investigations may lead to natural laws. Such laws are intrinsic to the universe, meaning that humans did not create them and cannot change them. We can only discover and understand them. Their discovery is a very human endeavor, with all the elements of mystery, imagination, struggle, triumph, and disappointment inherent in any creative effort. The cornerstone of discovering natural laws is observation. Science must describe the universe as it is, not as we imagine or wish it to be.

We all are curious to some extent. We look around, make generalizations, and try to understand what we see. For example, we look up and wonder whether one type of cloud signals an oncoming storm. As we become serious about exploring nature, we become more organized and formal in collecting and analyzing data. We attempt greater precision, perform controlled experiments (if we can), and write down ideas about how data may be organized. We then formulate models, theories, and laws based on the data we have collected, and communicate those results with others. This, in a nutshell, describes the scientific method that scientists employ to decide scientific issues on the basis of evidence from observation and experiment.

An investigation often begins with a scientist making an observation. The scientist observes a pattern or trend within the natural world. Observation may generate questions that the scientist wishes to answer. Next, the scientist may perform some research about the topic and devise a hypothesis. A hypothesis is a testable statement that describes how something in the natural world works. In essence, a hypothesis is an educated guess that explains something about an observation.

[OL] An educated guess is used throughout this section in describing a hypothesis to combat the tendency to think of a theory as an educated guess.

Scientists may test the hypothesis by performing an experiment. During an experiment, the scientist collects data that will help them learn about the phenomenon they are studying. Then the scientists analyze the results of the experiment (that is, the data), often using statistical, mathematical, and/or graphical methods. From the data analysis, they draw conclusions. They may conclude that their experiment either supports or rejects their hypothesis. If the hypothesis is supported, the scientist usually goes on to test another hypothesis related to the first. If their hypothesis is rejected, they will often then test a new and different hypothesis in their effort to learn more about whatever they are studying.

Scientific processes can be applied to many situations. Let’s say that you try to turn on your car, but it will not start. You have just made an observation! You ask yourself, "Why won’t my car start?" You can now use scientific processes to answer this question. First, you generate a hypothesis such as, "The car won’t start because it has no gasoline in the gas tank." To test this hypothesis, you put gasoline in the car and try to start it again. If the car starts, then your hypothesis is supported by the experiment. If the car does not start, then your hypothesis is rejected. You will then need to think up a new hypothesis to test such as, "My car won’t start because the fuel pump is broken." Hopefully, your investigations lead you to discover why the car won’t start and enable you to fix it.

A model is a representation of something that is often too difficult (or impossible) to study directly. Models can take the form of physical models, equations, computer programs, or simulations—computer graphics/animations. Models are tools that are especially useful in modern physics because they let us visualize phenomena that we normally cannot observe with our senses, such as very small objects or objects that move at high speeds. For example, we can understand the structure of an atom using models, without seeing an atom with our own eyes. Although images of single atoms are now possible, these images are extremely difficult to achieve and are only possible due to the success of our models. The existence of these images is a consequence rather than a source of our understanding of atoms. Models are always approximate, so they are simpler to consider than the real situation; the more complete a model is, the more complicated it must be. Models put the intangible or the extremely complex into human terms that we can visualize, discuss, and hypothesize about.

Scientific models are constructed based on the results of previous experiments. Even still, models often only describe a phenomenon partially or in a few limited situations. Some phenomena are so complex that they may be impossible to model in their entirety, even using computers. An example is the electron cloud model of the atom, in which electrons are moving around the atom’s center in distinct clouds (Figure 1.12) that represent the likelihood of finding an electron in different places. This model helps us to visualize the structure of an atom. However, it does not show us exactly where an electron will be within its cloud at any one particular time.

As mentioned previously, physicists use a variety of models including equations, physical models, computer simulations, etc. For example, three-dimensional models are commonly used in chemistry and physics to model molecules. Properties other than appearance or location are usually modeled using mathematics, where functions are used to show how these properties relate to one another. Processes, such as the formation of a star or the planets, can also be modeled using computer simulations. Once a simulation is correctly programmed based on actual experimental data, the simulation can allow us to view processes that happened in the past or happen too quickly or slowly for us to observe directly. In addition, scientists can also run virtual experiments using computer-based models. In a model of planet formation, for example, the scientist could alter the amount or type of rocks present in space and see how it affects planet formation.

Scientists use models and experimental results to construct explanations of observations or design solutions to problems. For example, one way to make a car more fuel efficient is to reduce the friction or drag caused by air flowing around the moving car. This can be done by designing the body shape of the car to be more aerodynamic, such as by using rounded corners instead of sharp ones. Engineers can then construct physical models of the car body, place them in a wind tunnel, and examine the flow of air around the model. This can also be done mathematically in a computer simulation. The air flow pattern can be analyzed for regions of smooth air flow and for eddies that indicate drag. The model of the car body may have to be altered slightly to produce the smoothest pattern of air flow (i.e., the least drag). The pattern with the least drag may be the solution to increasing fuel efficiency of the car. This solution might then be incorporated into the car design.

Using Models and the Scientific Processes

Be sure to secure loose items before opening the window or door.

In this activity, you will learn about scientific models by making a model of how air flows through your classroom or a room in your house.

  • One room with at least one window or door that can be opened
  • Work with a group of four, as directed by your teacher. Close all of the windows and doors in the room you are working in. Your teacher may assign you a specific window or door to study.
  • Before opening any windows or doors, draw a to-scale diagram of your room. First, measure the length and width of your room using the tape measure. Then, transform the measurement using a scale that could fit on your paper, such as 5 centimeters = 1 meter.
  • Your teacher will assign you a specific window or door to study air flow. On your diagram, add arrows showing your hypothesis (before opening any windows or doors) of how air will flow through the room when your assigned window or door is opened. Use pencil so that you can easily make changes to your diagram.
  • On your diagram, mark four locations where you would like to test air flow in your room. To test for airflow, hold a strip of single ply tissue paper between the thumb and index finger. Note the direction that the paper moves when exposed to the airflow. Then, for each location, predict which way the paper will move if your air flow diagram is correct.
  • Now, each member of your group will stand in one of the four selected areas. Each member will test the airflow. Agree upon an approximate height at which everyone will hold their papers.
  • When your teacher tells you to, open your assigned window and/or door. Each person should note the direction that their paper points immediately after the window or door is opened. Record your results on your diagram.
  • Did the airflow test data support or refute the hypothetical model of air flow shown in your diagram? Why or why not? Correct your model based on your experimental evidence.
  • With your group, discuss how accurate your model is. What limitations did it have? Write down the limitations that your group agreed upon.
Grasp Check: Could you use your model to predict the air flow that would result from opening a different window or door in the room?

  • Yes, you could use your model to predict air flow through a new window. The earlier experiment of air flow would help you model the system more accurately.
  • Yes, you could use your model to predict air flow through a new window. The earlier experiment of air flow is not useful for modeling the new system.
  • No, you cannot model a system to predict the air flow through a new window. The earlier experiment of air flow would help you model the system more accurately.
  • No, you cannot model a system to predict the air flow through a new window. The earlier experiment of air flow is not useful for modeling the new system.

This Snap Lab! has students construct a model of how air flows in their classroom. Each group of four students will create a model of air flow in their classroom using a scale drawing of the room. Then, the groups will test the validity of their model by placing weathervanes that they have constructed around the room and opening a window or door. By observing the weather vanes, students will see how air actually flows through the room from a specific window or door. Students will then correct their model based on their experimental evidence. The following material list is given per group:

  • One room with at least one window or door that can be opened (An optimal configuration would be one window or door per group.)
  • Several pieces of construction paper (at least four per group)
  • Strips of single ply tissue paper
  • One tape measure (long enough to measure the dimensions of the room)
  • Group size can vary depending on the number of windows/doors available and the number of students in the class.
  • The room dimensions could be provided by the teacher. Also, students may need a brief introduction in how to make a drawing to scale.
  • This is another opportunity to discuss controlled experiments in terms of why the students should hold the strips of tissue paper at the same height and in the same way. One student could also serve as a control and stand far away from the window/door or in another area that will not receive air flow from the window/door.
  • You will probably need to coordinate this when multiple windows or doors are used. Only one window or door should be opened at a time for best results. Between openings, allow a short period (5 minutes) when all windows and doors are closed, if possible.

Answers to the Grasp Check will vary, but the air flow in the new window or door should be based on what the students observed in their experiment.

Scientific Laws and Theories

A scientific law is a description of a pattern in nature that is true in all circumstances that have been studied. That is, physical laws are meant to be universal, meaning that they apply throughout the known universe. Laws are often also concise, whereas theories are more complicated. A law can be expressed in the form of a single sentence or mathematical equation. For example, Newton’s second law of motion, which relates the motion of an object to the force applied (F), the mass of the object (m), and the object’s acceleration (a), is simply stated using the equation F = ma.

Scientific ideas and explanations that are true in many, but not all, situations in the universe are usually called principles. An example is Pascal’s principle, which explains properties of liquids, but not solids or gases. However, the distinction between laws and principles is sometimes not carefully made in science.

A theory is an explanation for patterns in nature that is supported by much scientific evidence and verified multiple times by multiple researchers. While many people confuse theories with educated guesses or hypotheses, theories have withstood more rigorous testing and verification than hypotheses.

[OL] Explain to students that in informal, everyday English the word theory can be used to describe an idea that is possibly true but that has not been proven to be true. This use of the word theory often leads people to think that scientific theories are nothing more than educated guesses. This is not just a misconception among students, but among the general public as well.

As a closing idea about scientific processes, we want to point out that scientific laws and theories, even those that have been supported by experiments for centuries, can still be changed by new discoveries. This is especially true when new technologies emerge that allow us to observe things that were formerly unobservable. Imagine how viewing previously invisible objects with a microscope or viewing Earth for the first time from space may have instantly changed our scientific theories and laws! What discoveries still await us in the future? The constant retesting and perfecting of our scientific laws and theories allows our knowledge of nature to progress. For this reason, many scientists are reluctant to say that their studies prove anything. By saying support instead of prove , it keeps the door open for future discoveries, even if they won’t occur for centuries or even millennia.

[OL] With regard to scientists avoiding using the word prove , the general public knows that science has proven certain things such as that the heart pumps blood and the Earth is round. However, scientists should shy away from using prove because it is impossible to test every single instance and every set of conditions in a system to absolutely prove anything. Using support or similar terminology leaves the door open for further discovery.

Check Your Understanding

  • Models are simpler to analyze.
  • Models give more accurate results.
  • Models provide more reliable predictions.
  • Models do not require any computer calculations.
  • They are the same.
  • A hypothesis has been thoroughly tested and found to be true.
  • A hypothesis is a tentative assumption based on what is already known.
  • A hypothesis is a broad explanation firmly supported by evidence.
  • A scientific model is a representation of something that can be easily studied directly. It is useful for studying things that can be easily analyzed by humans.
  • A scientific model is a representation of something that is often too difficult to study directly. It is useful for studying a complex system or systems that humans cannot observe directly.
  • A scientific model is a representation of scientific equipment. It is useful for studying working principles of scientific equipment.
  • A scientific model is a representation of a laboratory where experiments are performed. It is useful for studying requirements needed inside the laboratory.
  • The hypothesis must be validated by scientific experiments.
  • The hypothesis must not include any physical quantity.
  • The hypothesis must be a short and concise statement.
  • The hypothesis must apply to all the situations in the universe.
  • A scientific theory is an explanation of natural phenomena that is supported by evidence.
  • A scientific theory is an explanation of natural phenomena without the support of evidence.
  • A scientific theory is an educated guess about the natural phenomena occurring in nature.
  • A scientific theory is an uneducated guess about natural phenomena occurring in nature.
  • A hypothesis is an explanation of the natural world with experimental support, while a scientific theory is an educated guess about a natural phenomenon.
  • A hypothesis is an educated guess about natural phenomenon, while a scientific theory is an explanation of natural world with experimental support.
  • A hypothesis is experimental evidence of a natural phenomenon, while a scientific theory is an explanation of the natural world with experimental support.
  • A hypothesis is an explanation of the natural world with experimental support, while a scientific theory is experimental evidence of a natural phenomenon.

Use the Check Your Understanding questions to assess students’ achievement of the section’s learning objectives. If students are struggling with a specific objective, the Check Your Understanding will help identify which objective and direct students to the relevant content.


Want to cite, share, or modify this book? This book uses the Creative Commons Attribution License and you must attribute Texas Education Agency (TEA). The original material is available at: https://www.texasgateway.org/book/tea-physics . Changes were made to the original material, including updates to art, structure, and other content updates.

Access for free at https://openstax.org/books/physics/pages/1-introduction
  • Authors: Paul Peter Urone, Roger Hinrichs
  • Publisher/website: OpenStax
  • Book title: Physics
  • Publication date: Mar 26, 2020
  • Location: Houston, Texas
  • Book URL: https://openstax.org/books/physics/pages/1-introduction
  • Section URL: https://openstax.org/books/physics/pages/1-2-the-scientific-methods

© Jun 7, 2024 Texas Education Agency (TEA). The OpenStax name, OpenStax logo, OpenStax book covers, OpenStax CNX name, and OpenStax CNX logo are not subject to the Creative Commons license and may not be reproduced without the prior and express written consent of Rice University.


Models in Science

Models are of central importance in many scientific contexts. The centrality of models such as inflationary models in cosmology, general-circulation models of the global climate, the double-helix model of DNA, evolutionary models in biology, agent-based models in the social sciences, and general-equilibrium models of markets in their respective domains is a case in point (the Other Internet Resources section at the end of this entry contains links to online resources that discuss these models). Scientists spend significant amounts of time building, testing, comparing, and revising models, and much journal space is dedicated to interpreting and discussing the implications of models.

As a result, models have attracted philosophers’ attention and there are now sizable bodies of literature about various aspects of scientific modeling. A tangible result of philosophical engagement with models is a proliferation of model types recognized in the philosophical literature. Probing models , phenomenological models , computational models , developmental models , explanatory models , impoverished models , testing models , idealized models , theoretical models , scale models , heuristic models , caricature models , exploratory models , didactic models , fantasy models , minimal models , toy models , imaginary models , mathematical models , mechanistic models , substitute models , iconic models , formal models , analogue models , and instrumental models are but some of the notions that are used to categorize models. While at first glance this abundance is overwhelming, it can be brought under control by recognizing that these notions pertain to different problems that arise in connection with models. Models raise questions in semantics (how, if at all, do models represent?), ontology (what kind of things are models?), epistemology (how do we learn and explain with models?), and, of course, in other domains within philosophy of science.

1. Semantics: Models and Representation


Many scientific models are representational models: they represent a selected part or aspect of the world, which is the model’s target system. Standard examples are the billiard ball model of a gas, the Bohr model of the atom, the Lotka–Volterra model of predator–prey interaction, the Mundell–Fleming model of an open economy, and the scale model of a bridge.

This raises the question of what it means for a model to represent a target system. This problem is rather involved and decomposes into various subproblems. For an in-depth discussion of the issue of representation, see the entry on scientific representation. At this point, rather than addressing the issue of what it means for a model to represent, we focus on a number of different kinds of representation that play important roles in the practice of model-based science, namely scale models, analogical models, idealized models, toy models, minimal models, phenomenological models, exploratory models, and models of data. These categories are not mutually exclusive, and a given model can fall into several categories at once.

Scale models . Some models are down-sized or enlarged copies of their target systems (Black 1962). A typical example is a small wooden car that is put into a wind tunnel to explore the actual car’s aerodynamic properties. The intuition is that a scale model is a naturalistic replica or a truthful mirror image of the target; for this reason, scale models are sometimes also referred to as “true models” (Achinstein 1968: Ch. 7). However, there is no such thing as a perfectly faithful scale model; faithfulness is always restricted to some respects. The wooden scale model of the car provides a faithful portrayal of the car’s shape but not of its material. And even in the respects in which a model is a faithful representation, the relation between model-properties and target-properties is usually not straightforward. When engineers use, say, a 1:100 scale model of a ship to investigate the resistance that an actual ship experiences when moving through the water, they cannot simply measure the resistance the model experiences and then multiply it with the scale. In fact, the resistance faced by the model does not translate into the resistance faced by the actual ship in a straightforward manner (that is, one cannot simply scale the water resistance with the scale of the model: the real ship need not have one hundred times the water resistance of its 1:100 model). The two quantities stand in a complicated nonlinear relation with each other, and the exact form of that relation is often highly nontrivial and emerges as the result of a thoroughgoing study of the situation (Sterrett 2006, forthcoming; Pincock forthcoming).
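As a hedged illustration of that nonlinearity (a standard result from ship-model testing, not a claim made in the entry itself): dynamical similarity for wave-making resistance requires the model and the ship to run at the same Froude number, which ties speeds and resistances to the length scale nonlinearly.

```latex
% Froude-number matching in ship model tests (illustrative).
\mathrm{Fr} = \frac{V}{\sqrt{gL}}
\quad\Rightarrow\quad
V_{\mathrm{ship}} = V_{\mathrm{model}}\,\sqrt{\frac{L_{\mathrm{ship}}}{L_{\mathrm{model}}}},
\qquad
\frac{R_{\mathrm{ship}}}{R_{\mathrm{model}}} \approx \left(\frac{L_{\mathrm{ship}}}{L_{\mathrm{model}}}\right)^{3}
```

On this scaling, a 1:100 model run at matched Froude number corresponds to roughly a 100³-fold resistance ratio for the wave-making component alone, while the viscous component follows a different (Reynolds-number) scaling, so no single multiplier converts model resistance into ship resistance.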

Analogical models . Standard examples of analogical models include the billiard ball model of a gas, the hydraulic model of an economic system, and the dumb hole model of a black hole. At the most basic level, two things are analogous if there are certain relevant similarities between them. In a classic text, Hesse (1963) distinguishes different types of analogies according to the kinds of similarity relations into which two objects enter. A simple type of analogy is one that is based on shared properties. There is an analogy between the earth and the moon based on the fact that both are large, solid, opaque, spherical bodies that receive heat and light from the sun, revolve around their axes, and gravitate towards other bodies. But sameness of properties is not a necessary condition. An analogy between two objects can also be based on relevant similarities between their properties. In this more liberal sense, we can say that there is an analogy between sound and light because echoes are similar to reflections, loudness to brightness, pitch to color, detectability by the ear to detectability by the eye, and so on.

Analogies can also be based on the sameness or resemblance of relations between parts of two systems rather than on their monadic properties. It is in this sense that the relation of a father to his children is asserted to be analogous to the relation of the state to its citizens. The analogies mentioned so far have been what Hesse calls “material analogies”. We obtain a more formal notion of analogy when we abstract from the concrete features of the systems and only focus on their formal set-up. What the analogue model then shares with its target is not a set of features, but the same pattern of abstract relationships (i.e., the same structure, where structure is understood in a formal sense). This notion of analogy is closely related to what Hesse calls “formal analogy”. Two items are related by formal analogy if they are both interpretations of the same formal calculus. For instance, there is a formal analogy between a swinging pendulum and an oscillating electric circuit because they are both described by the same mathematical equation.
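The pendulum/circuit case can be made concrete (a standard physics fact, not the entry’s own formulation): both systems are interpretations of the same harmonic-oscillator equation, differing only in what the symbols denote.

```latex
% One formal calculus, two interpretations (illustrative).
\ddot{q}(t) + \omega^{2}\, q(t) = 0,
\qquad
\omega^{2} =
\begin{cases}
  g / \ell      & \text{small-angle pendulum } (q = \text{angular displacement}),\\[2pt]
  1 / (LC)      & \text{LC circuit } (q = \text{capacitor charge}).
\end{cases}
```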

A further important distinction due to Hesse is the one between positive, negative, and neutral analogies. The positive analogy between two items consists in the properties or relations they share (both gas molecules and billiard balls have mass); the negative analogy consists in the properties they do not share (billiard balls are colored, gas molecules are not); the neutral analogy comprises the properties of which it is not known (yet) whether they belong to the positive or the negative analogy (do billiard balls and molecules have the same cross section in scattering processes?). Neutral analogies play an important role in scientific research because they give rise to questions and suggest new hypotheses. For this reason several authors have emphasized the heuristic role that analogies play in theory and model construction, as well as in creative thought (Bailer-Jones and Bailer-Jones 2002; Bailer-Jones 2009: Ch. 3; Hesse 1974; Holyoak and Thagard 1995; Kroes 1989; Psillos 1995; and the essays collected in Helman 1988). See also the entry on analogy and analogical reasoning .

It has also been discussed whether using analogical models can in some cases be confirmatory in a Bayesian sense. Hesse (1974: 208–219) argues that this is possible if the analogy is a material analogy. Bartha (2010, 2013 [2019]) disagrees and argues that analogical models cannot be confirmatory in a Bayesian sense because the information encapsulated in an analogical model is part of the relevant background knowledge, which has the consequence that the posterior probability of a hypothesis about a target system cannot change as a result of observing the analogy. Analogical models can therefore only establish the plausibility of a conclusion in the sense of justifying a non-negligible prior probability assignment (Bartha 2010: §8.5).

More recently, these questions have been discussed in the context of so-called analogue experiments, which promise to provide knowledge about an experimentally inaccessible target system (e.g., a black hole) by manipulating another system, the source system (e.g., a Bose–Einstein condensate). Dardashti, Thébault, and Winsberg (2017) and Dardashti, Hartmann et al. (2019) have argued that, given certain conditions, an analogue simulation of one system by another system can confirm claims about the target system (e.g., that black holes emit Hawking radiation). See Crowther et al. (forthcoming) for a critical discussion, and also the entry on computer simulations in science .

Idealized models. Idealized models are models that involve a deliberate simplification or distortion of something complicated with the objective of making it more tractable or understandable. Frictionless planes, point masses, completely isolated systems, omniscient and fully rational agents, and markets in perfect equilibrium are well-known examples. Idealizations are a crucial means for science to cope with systems that are too difficult to study in their full complexity (Potochnik 2017).

Philosophical debates over idealization have focused on two general kinds of idealizations: so-called Aristotelian and Galilean idealizations. Aristotelian idealization amounts to “stripping away”, in our imagination, all properties from a concrete object that we believe are not relevant to the problem at hand. There is disagreement on how this is done. Jones (2005) and Godfrey-Smith (2009) offer an analysis of abstraction in terms of truth: while an abstraction remains silent about certain features or aspects of the system, it does not say anything false and still offers a true (albeit restricted) description. This allows scientists to focus on a limited set of properties in isolation. An example is a classical-mechanics model of the planetary system, which describes the position of an object as a function of time and disregards all other properties of planets. Cartwright (1989: Ch. 5), Musgrave (1981), who uses the term “negligibility assumptions”, and Mäki (1994), who speaks of the “method of isolation”, allow abstractions to say something false, for instance by neglecting a causally relevant factor.

Galilean idealizations are ones that involve deliberate distortions: physicists build models consisting of point masses moving on frictionless planes; economists assume that agents are omniscient; biologists study isolated populations; and so on. Using simplifications of this sort whenever a situation is too difficult to tackle was characteristic of Galileo's approach to science. For this reason it is common to refer to 'distortive' idealizations of this kind as "Galilean idealizations" (McMullin 1985). An example of such an idealization is a model of motion on an ice rink that assumes the ice to be frictionless, when, in reality, it has low but non-zero friction.

Galilean idealizations are sometimes characterized as controlled idealizations, i.e., as ones that allow for de-idealization by successive removal of the distorting assumptions (McMullin 1985; Weisberg 2007). Thus construed, Galilean idealizations don’t cover all distortive idealizations. Batterman (2002, 2011) and Rice (2015, 2019) discuss distortive idealizations that are ineliminable in that they cannot be removed from the model without dismantling the model altogether.

What does a model involving distortions tell us about reality? Laymon (1991) formulated a theory which understands idealizations as ideal limits: imagine a series of refinements of the actual situation which approach the postulated limit, and then require that the closer the properties of a system come to the ideal limit, the closer its behavior has to come to the behavior of the system at the limit (monotonicity). If this is the case, then scientists can study the system at the limit and carry over conclusions from that system to systems distant from the limit. But these conditions need not always hold. In fact, it can happen that the behavior of the systems in the series does not converge to the behavior of the system at the limit. If this happens, we are faced with a singular limit (Berry 2002). In such cases the system at the limit can exhibit behavior that is different from the behavior of systems distant from the limit. Limits of this kind appear in a number of contexts, most notably in the theory of phase transitions in statistical mechanics. There is, however, no agreement over the correct interpretation of such limits. Batterman (2002, 2011) sees them as indicative of emergent phenomena, while Butterfield (2011a,b) sees them as compatible with reduction (see also the entries on intertheory relations in physics and scientific reduction).
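A simple algebraic case illustrates what a singular limit is (a standard textbook example, offered here only for illustration). For every \(\epsilon > 0\), the equation \(\epsilon x^2 + x - 1 = 0\) has two roots,

\[
x_{\pm} = \frac{-1 \pm \sqrt{1 + 4\epsilon}}{2\epsilon},
\]

with \(x_{+} \to 1\) and \(x_{-} \to -\infty\) as \(\epsilon \to 0\). The equation at the limit, \(x - 1 = 0\), has only one root. The solution structure of the limit equation thus differs qualitatively from that of equations arbitrarily close to it, no matter how small \(\epsilon\) becomes.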

Galilean and Aristotelian idealizations are not mutually exclusive, and many models exhibit both in that they take into account a narrow set of properties and distort them. Consider again the classical-mechanics model of the planetary system: the model only takes a narrow set of properties into account and distorts them, for instance by describing planets as ideal spheres with a rotation-symmetric mass distribution.

A concept that is closely related to idealization is approximation. In a broad sense, A can be called an approximation of B if A is somehow close to B. This, however, is too broad because it makes room for any likeness to qualify as an approximation. Rueger and Sharp (1998) limit approximations to quantitative closeness, and Portides (2007) frames approximation as an essentially mathematical concept. On that notion, A is an approximation of B iff A is close to B in a specifiable mathematical sense, where the relevant sense of "close" will be given by the context. An example is the approximation of one curve with another one, which can be achieved by expanding a function into a power series and only keeping the first two or three terms. In other situations, we approximate an equation with another one by letting a control parameter tend towards zero (Redhead 1980). This raises the question of how approximations are different from idealizations, which can also involve mathematical closeness. Norton (2012) sees the distinction between the two as referential: an approximation is an inexact description of the target while an idealization introduces a secondary system (real or fictitious) which stands for the target system (while being distinct from it). If we say that the period of the pendulum on the wall is roughly two seconds, then this is an approximation; if we reason about the real pendulum by assuming that the pendulum bob is a point mass and that the string is massless (i.e., if we assume that the pendulum is a so-called ideal pendulum), then we use an idealization. Separating idealizations and approximations in this way does not imply that there cannot be interesting relations between the two. For instance, an approximation can be justified by pointing out that it is the mathematical expression of an acceptable idealization (e.g., when we neglect a dissipative term in an equation of motion because we make the idealizing assumption that the system is frictionless).
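The ideal pendulum just mentioned provides a standard illustration of how the two notions interact. The equation of motion \(\ddot{\theta} = -(g/\ell)\sin\theta\) can be approximated by expanding the sine into a power series and keeping only the first term,

\[
\sin\theta = \theta - \frac{\theta^3}{3!} + \frac{\theta^5}{5!} - \cdots \;\approx\; \theta,
\]

which yields the solvable linear equation \(\ddot{\theta} = -(g/\ell)\,\theta\). The truncation is an approximation in Norton's sense (an inexact description, quantitatively close for small angles), and it can be justified by the idealizing assumption that the pendulum performs only small oscillations.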

Toy models. Toy models are extremely simplified and strongly distorted renderings of their targets, and often only represent a small number of causal or explanatory factors (Hartmann 1995; Reutlinger et al. 2018; Nguyen forthcoming). Typical examples are the Lotka–Volterra model in population ecology (Weisberg 2013) and the Schelling model of segregation in the social sciences (Sugden 2000). Toy models usually do not perform well in terms of prediction and empirical adequacy, and they seem to serve other epistemic goals (more on these in Section 3). This raises the question whether they should be regarded as representational at all (Luczak 2017).
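The Lotka–Volterra model illustrates how little machinery a toy model needs: two coupled rate equations and a handful of parameters. The following sketch (the parameter values are illustrative, chosen only to produce the characteristic oscillations, and the crude forward-Euler integrator is for demonstration only) makes this explicit:

```python
# Minimal sketch of the Lotka-Volterra predator-prey model.
# Parameter values are illustrative, not drawn from any particular study.

def lotka_volterra(prey, pred, alpha=1.0, beta=0.1, delta=0.075, gamma=1.5):
    """One evaluation of the coupled rate equations:
    d(prey)/dt = alpha*prey - beta*prey*pred
    d(pred)/dt = delta*prey*pred - gamma*pred
    """
    dprey = alpha * prey - beta * prey * pred
    dpred = delta * prey * pred - gamma * pred
    return dprey, dpred

def simulate(prey0=10.0, pred0=5.0, dt=0.001, steps=20000):
    prey, pred = prey0, pred0
    trajectory = [(prey, pred)]
    for _ in range(steps):
        dprey, dpred = lotka_volterra(prey, pred)
        prey += dprey * dt          # forward-Euler step
        pred += dpred * dt
        trajectory.append((prey, pred))
    return trajectory

if __name__ == "__main__":
    print("final populations:", simulate()[-1])
```

Nothing in the sketch is drawn from an underlying theory of predation; the model just couples two commonsensical growth and decay terms, which is precisely what makes it a toy.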

Some toy models are characterized as “caricatures” (Gibbard and Varian 1978; Batterman and Rice 2014). Caricature models isolate a small number of salient characteristics of a system and distort them into an extreme case. A classic example is Akerlof’s (1970) model of the car market (“the market for lemons”), which explains the difference in price between new and used cars solely in terms of asymmetric information, thereby disregarding all other factors that may influence the prices of cars (see also Sugden 2000). However, it is controversial whether such highly idealized models can still be regarded as informative representations of their target systems. For a discussion of caricature models, in particular in economics, see Reiss (2006).

Minimal models. Minimal models are closely related to toy models in that they are also highly simplified. They are so simplified that some argue that they are non-representational: they lack any similarity, isomorphism, or resemblance relation to the world (Batterman and Rice 2014). It has been argued that many economic models are of this kind (Grüne-Yanoff 2009). Minimal economic models are also unconstrained by natural laws, and do not isolate any real factors (ibid.). And yet, minimal models help us to learn something about the world in the sense that they function as surrogates for a real system: scientists can study the model to learn something about the target. It is, however, controversial whether minimal models can assist scientists in learning something about the world if they do not represent anything (Fumagalli 2016). Minimal models that purportedly lack any similarity or representation are also used in different parts of physics to explain the macro-scale behavior of various systems whose micro-scale behavior is extremely diverse (Batterman and Rice 2014; Rice 2018, 2019; Shech 2018). Typical examples are the features of phase transitions and the flow of fluids. Proponents of minimal models argue that what provides an explanation of the macro-scale behavior of a system in these cases is not a feature that system and model have in common, but the fact that the system and the model belong to the same universality class (a class of models that exhibit the same limiting behavior even though they show very different behavior at finite scales). It is, however, controversial whether explanations of this kind are possible without reference to at least some common features (Lange 2015; Reutlinger 2017).

Phenomenological models. Phenomenological models have been defined in different, although related, ways. A common definition takes them to be models that only represent observable properties of their targets and refrain from postulating hidden mechanisms and the like (Bokulich 2011). Another approach, due to McMullin (1968), defines phenomenological models as models that are independent of theories. This, however, seems to be too strong. Many phenomenological models, while failing to be derivable from a theory, incorporate principles and laws associated with theories. The liquid-drop model of the atomic nucleus, for instance, portrays the nucleus as a liquid drop and describes it as having several properties (surface tension and charge, among others) originating in different theories (hydrodynamics and electrodynamics, respectively). Certain aspects of these theories—although usually not the full theories—are then used to determine both the static and dynamical properties of the nucleus. Finally, it is tempting to identify phenomenological models with models of a phenomenon. Here, "phenomenon" is an umbrella term covering all relatively stable and general features of the world that are interesting from a scientific point of view. The weakening of sound as a function of the distance to the source, the decay of alpha particles, the chemical reactions that take place when a piece of limestone dissolves in an acid, the growth of a population of rabbits, and the dependence of house prices on the base rate of the Federal Reserve are phenomena in this sense. For further discussion, see Bailer-Jones (2009: Ch. 7), Bogen and Woodward (1988), and the entry on theory and observation in science.

Exploratory models. Exploratory models are models which are not proposed in the first place to learn something about a specific target system or a particular experimentally established phenomenon. Exploratory models function as the starting point of further explorations in which the model is modified and refined. Gelfert (2016) points out that exploratory models can provide proofs-of-principle and suggest how-possibly explanations (2016: Ch. 4). As an example, Gelfert mentions early models in theoretical ecology, such as the Lotka–Volterra model of predator–prey interaction, which mimic the qualitative behavior of speed-up and slow-down in population growth in an environment with limited resources (2016: 80). Such models do not give an accurate account of the behavior of any actual population, but they provide the starting point for the development of more realistic models. Massimi (2019) notes that exploratory models provide modal knowledge. Fisher (2006) sees these models as tools for the examination of the features of a given theory.

Models of data. A model of data (sometimes also "data model") is a corrected, rectified, regimented, and in many instances idealized version of the data we gain from immediate observation, the so-called raw data (Suppes 1962). Characteristically, one first eliminates errors (e.g., removes points from the record that are due to faulty observation) and then presents the data in a "neat" way, for instance by drawing a smooth curve through a set of points. These two steps are commonly referred to as "data reduction" and "curve fitting". When we investigate, for instance, the trajectory of a certain planet, we first eliminate erroneous points from the observation records and then fit a smooth curve to the remaining ones. Models of data play a crucial role in confirming theories because it is the model of data, and not the often messy and complex raw data, that theories are tested against.

The construction of a model of data can be extremely complicated. It requires sophisticated statistical techniques and raises serious methodological as well as philosophical questions. How do we decide which points on the record need to be removed? And given a clean set of data, what curve do we fit to it? The first question has been dealt with mainly within the context of the philosophy of experiment (see, for instance, Galison 1997 and Staley 2004). At the heart of the latter question lies the so-called curve-fitting problem, which is that the data themselves dictate neither the form of the fitted curve nor what statistical techniques scientists should use to construct a curve. The choice and rationalization of statistical techniques is the subject matter of the philosophy of statistics, and we refer the reader to the entry on the philosophy of statistics and to Bandyopadhyay and Forster (2011) for a discussion of these issues. Further discussions of models of data can be found in Bailer-Jones (2009: Ch. 7), Brewer and Chinn (1994), Harris (2003), Hartmann (1995), Laymon (1982), Mayo (1996, 2018), and Suppes (2007).
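A toy example makes the two steps, and the arbitrariness the curve-fitting problem points to, concrete. In the following sketch both the outlier criterion (discard points more than two standard deviations from a rough fit) and the functional form (a quadratic) are illustrative assumptions; nothing in the data dictates either choice.

```python
# Toy sketch of constructing a model of data: crude data reduction
# followed by curve fitting. The 2-sigma cut-off and the choice of a
# quadratic are illustrative assumptions, not recommendations.
import numpy as np

raw = np.array([(0, 0), (1, 1), (2, 4), (3, 9),
                (4, 16), (5, 25), (6, 90), (7, 49)], dtype=float)  # (x, y)
x, y = raw[:, 0], raw[:, 1]

# Data reduction: discard points that deviate strongly from a rough fit.
rough = np.polyval(np.polyfit(x, y, 2), x)
residuals = y - rough
keep = np.abs(residuals) < 2 * residuals.std()

# Curve fitting: least-squares quadratic through the cleaned data.
coeffs = np.polyfit(x[keep], y[keep], 2)
print("discarded points:", raw[~keep])
print("fitted curve: y = %.2f x^2 + %.2f x + %.2f" % tuple(coeffs))
```

Run on these numbers, the sketch discards the point (6, 90) and recovers the curve y = x²; with a different cut-off or a different functional form it would return a different model of the same raw data, which is precisely the philosophical point.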

The gathering, processing, dissemination, analysis, interpretation, and storage of data raise many important questions beyond the relatively narrow issues pertaining to models of data. Leonelli (2016, 2019) investigates the status of data in science, argues that data should be defined not by their provenance but by their evidential function, and studies how data travel between different contexts.

2. Ontology: What Are Models?

What are models? That is, what kind of object are scientists dealing with when they work with a model? A number of authors have voiced skepticism that this question has a meaningful answer, because models do not belong to a distinctive ontological category and anything can be a model (Callender and Cohen 2006; Giere 2010; Suárez 2004; Swoyer 1991; Teller 2001). Contessa (2010) replies that this is a non sequitur. Even if, from an ontological point of view, anything can be a model and the class of things that are referred to as models contains a heterogeneous collection of different things, it does not follow that it is either impossible or pointless to develop an ontology of models. This is because even if not all models are of a particular ontological kind, one can nevertheless ask to what ontological kinds the things that are de facto used as models belong. There may be several such kinds and each kind can be analyzed in its own right. What sort of objects scientists use as models has important repercussions for how models perform relevant functions such as representation and explanation, and hence this issue cannot be dismissed as "just sociology".

The objects that commonly serve as models indeed belong to different ontological kinds: physical objects, fictional objects, abstract objects, set-theoretic structures, descriptions, equations, or combinations of some of these, are frequently referred to as models, and some models may fall into yet other classes of things. Following Contessa's advice, the aim then is to develop an ontology for each of these. Those with an interest in ontology may see this as a goal in its own right. It is worth noting, however, that the question has reverberations beyond ontology and bears on how one understands the semantics and the epistemology of models.

Some models are physical objects. Such models are commonly referred to as "material models". Standard examples of models of this kind are scale models of objects like bridges and ships (see Section 1), Watson and Crick's metal model of DNA (Schaffner 1969), Phillips and Newlyn's hydraulic model of an economy (Morgan and Boumans 2004), the US Army Corps of Engineers' model of the San Francisco Bay (Weisberg 2013), Kendrew's plasticine model of myoglobin (Frigg and Nguyen 2016), and model organisms in the life sciences (Leonelli and Ankeny 2012; Leonelli 2010; Levy and Currie 2015). All these are material objects that serve as models. Material models do not give rise to ontological difficulties over and above the well-known problems in connection with objects that metaphysicians deal with, for instance concerning the nature of properties, the identity of objects, parts and wholes, and so on.

However, many models are not material models. The Bohr model of the atom, a frictionless pendulum, or an isolated population, for instance, are in the scientist's mind rather than in the laboratory and they do not have to be physically realized and experimented upon to serve as models. These "non-physical" models raise serious ontological questions, and how they are best analyzed is a matter of controversy. In the remainder of this section we review some of the suggestions that have attracted attention in the recent literature on models.

What has become known as the fiction view of models sees models as akin to the imagined objects of literary fiction—that is, as akin to fictional characters like Sherlock Holmes or fictional places like Middle Earth (Godfrey-Smith 2007). So when Bohr introduced his model of the atom he introduced a fictional object of the same kind as the object Conan Doyle introduced when he invented Sherlock Holmes. This view squares well with scientific practice, where scientists often talk about models as if they were objects and often take themselves to be describing imaginary atoms, populations, or economies. It also squares well with philosophical views that see the construction and manipulation of models as essential aspects of scientific investigation (Morgan 1999), even if models are not material objects, because these practices seem to be directed toward some kind of object.

What philosophical questions does this move solve? Fictional discourse and fictional entities face well-known philosophical questions, and one may well argue that simply likening models to fictions amounts to explaining obscurum per obscurius (for a discussion of these questions, see the entry on fictional entities). One way to counter this objection and to motivate the fiction view of models is to point to the view's heuristic power. In this vein Frigg (2010b) identifies five specific issues that an ontology of models has to address and then notes that these issues arise in very similar ways in the discussion about fiction (the issues are the identity conditions, property attribution, the semantics of comparative statements, truth conditions, and the epistemology of imagined objects). Likening models to fiction then has heuristic value because there is a rich literature on fiction that offers a number of solutions to these issues.

Only a small portion of the options available in the extensive literature on fictions has actually been explored in the context of scientific models. Contessa (2010) formulates what he calls the "dualist account", according to which a model is an abstract object that stands for a possible concrete object. The Rutherford model of the atom, for instance, is an abstract object that acts as a stand-in for one of the possible systems that contain an electron orbiting around a nucleus in a well-defined orbit. Barberousse and Ludwig (2009) and Frigg (2010b) take a different route and develop an account of models as fictions based on Walton's (1990) pretense theory of fiction. According to this view the sentences of a passage of text introducing a model should be seen as a prop in a game of make-believe, and the model is the product of an act of pretense. This is an antirealist position in that it takes talk of model "objects" to be figures of speech because ultimately there are no model objects—models only live in scientists' imaginations. Salis (forthcoming) reformulates this view into what she calls "the new fiction view of models". The core difference lies in the fact that what is considered as the model are the model descriptions and their content rather than the imaginings that they prescribe. This is a realist view of models, because descriptions exist.

The fiction view is not without critics. Giere (2009), Magnani (2012), Pincock (2012), Portides (2014), and Teller (2009) reject the fiction approach and argue, in different ways, that models should not be regarded as fictions. Weisberg (2013) argues for a middle position which sees fictions as playing a heuristic role but denies that they should be regarded as forming part of a scientific model. The common core of these criticisms is that the fiction view misconstrues the epistemic standing of models. To call something a fiction, so the charge goes, is tantamount to saying that it is false, and it is unjustified to call an entire model a fiction—and thereby claim that it fails to capture how the world is—just because the model involves certain false assumptions or fictional elements. In other words, a representation does not automatically count as fiction just because it contains some inaccuracies. Proponents of the fiction view agree with this point but deny that the notion of fiction should be analyzed in terms of falsity. What makes a work a fiction is not its falsity (or some ratio of false to true claims): neither is everything that is said in a novel untrue (Tolstoy's War and Peace contains many true statements about Napoleon's Franco-Russian War), nor does every text containing false claims qualify as fiction (false news reports are just that: false reports, not fictions). The defining feature of a fiction is that readers are supposed to imagine the events and characters described, not that they are false (Frigg 2010a; Salis forthcoming).

Giere (1988) advocated the view that "non-physical" models are abstract entities. However, there is little agreement on the nature of abstract objects, and Hale (1988: 86–87) lists no fewer than twelve different possible characterizations (for a review of the available options, see the entry on abstract objects). In recent publications, Thomasson (2020) and Thomson-Jones (2020) develop what they call an "artifactualist view" of models, which is based on Thomasson's (1999) theory of abstract artifacts. This view agrees with the pretense theory that the content of text that introduces a fictional character or a model should be understood as occurring in pretense, but at the same time insists that in producing such descriptions authors create abstract cultural artifacts that then exist independently of either the author or the readers. Artifactualism agrees with Platonism that abstract objects exist, but insists, contra Platonism, that abstract objects are brought into existence through a creative act and are not eternal. This allows the artifactualist to preserve the advantages of pretense theory while at the same time holding the realist view that fictional characters and models actually exist.

An influential point of view takes models to be set-theoretic structures. This position can be traced back to Suppes (1960) and is now, with slight variants, held by most proponents of the so-called semantic view of theories (for a discussion of this view, see the entry on the structure of scientific theories). There are differences between the versions of the semantic view, but with the exception of Giere (1988) all versions agree that models are structures of one sort or another (Da Costa and French 2000).

This view of models has been criticized on various grounds. One pervasive criticism is that many types of models that play an important role in science are not structures and cannot be accommodated within the structuralist view of models, which can neither account for how these models are constructed nor for how they work in the context of investigation (Cartwright 1999; Downes 1992; Morrison 1999). Examples of such models are interpretative models and mediating models, discussed later in Section 4.2. Another charge held against the set-theoretic approach is that set-theoretic structures by themselves cannot be representational models—at least if that requires them to share some structure with the target—because the ascription of a structure to a target system which forms part of the physical world relies on a substantive (non-structural) description of the target, which goes beyond what the structuralist approach can afford (Nguyen and Frigg forthcoming).

A time-honored position has it that a model is a stylized description of a target system. It has been argued that this is what scientists display in papers and textbooks when they present a model (Achinstein 1968; Black 1962). This view has not been subject to explicit criticism. However, some of the criticisms that have been marshaled against the so-called syntactic view of theories equally threaten a linguistic understanding of models (for a discussion of this view, see the entry on the structure of scientific theories). First, a standard criticism of the syntactic view is that by associating a theory with a particular formulation, the view misconstrues theory identity because any change in the formulation results in a new theory (Suppe 2000). A view that associates models with descriptions would seem to be open to the same criticism. Second, models have different properties than descriptions: the Newtonian model of the solar system consists of orbiting spheres, but it makes no sense to say this about its description. Conversely, descriptions have properties that models do not have: a description can be written in English and consist of 517 words, but the same cannot be said of a model. One way around these difficulties is to associate the model with the content of a description rather than with the description itself. For a discussion of a position on models that builds on the content of a description, see Salis (forthcoming).

A contemporary version of descriptivism is Levy's (2012, 2015) and Toon's (2012) so-called direct-representation view. This view shares with the fiction view of models (Section 2.2) the reliance on Walton's pretense theory, but uses it in a different way. The main difference is that the views discussed earlier see modeling as introducing a vehicle of representation, the model, that is distinct from the target, and they see the problem as elucidating what kind of thing the model is. On the direct-representation view there are no models distinct from the target; there are only model-descriptions and targets, with no models in-between them. Modeling, on this view, consists in providing an imaginative description of real things. A model-description prescribes imaginings about the real system; the description of the ideal pendulum, for instance, prescribes that model-users imagine the real string as massless and the bob as a point mass. This approach avoids the above problems because the identity conditions for models are given by the conditions for games of make-believe (and not by the syntax of a description) and property ascriptions take place in pretense. There are, however, questions about how this account deals with models that have no target (like models of the ether or four-sex populations), and about how models thus understood deal with idealizations. For a discussion of these points, see Frigg and Nguyen (2016), Poznic (2016), and Salis (forthcoming).

A closely related approach sees models as equations. This is a version of the view that models are descriptions, because equations are syntactic items that describe a mathematical structure. The issues that this view faces are similar to the ones we have already encountered: First, one can describe the same situation using different kinds of coordinates and as a result obtain different equations without thereby obtaining a different model. Second, the model and the equation have different properties. A pendulum contains a massless string, but the equation describing its motion does not; and an equation may be inhomogeneous, but the system it describes is not. It is an open question whether these issues can be avoided by appeal to a pretense account.

3. Epistemology: The Cognitive Functions of Models

One of the main reasons why models play such an important role in science is that they perform a number of cognitive functions. For example, models are vehicles for learning about the world. Significant parts of scientific investigation are carried out on models rather than on reality itself because by studying a model we can discover features of, and ascertain facts about, the system the model stands for: models allow for “surrogative reasoning” (Swoyer 1991). For instance, we study the nature of the hydrogen atom, the dynamics of a population, or the behavior of a polymer by studying their respective models. This cognitive function of models has been widely acknowledged in the literature, and some even suggest that models give rise to a new style of reasoning, “model-based reasoning”, according to which “inferences are made by means of creating models and manipulating, adapting, and evaluating them” (Nersessian 2010: 12; see also Magnani, Nersessian, and Thagard 1999; Magnani and Nersessian 2002; and Magnani and Casadio 2016).

Learning about a model happens in two places: in the construction of the model and in its manipulation (Morgan 1999). There are no fixed rules or recipes for model building and so the very activity of figuring out what fits together, and how, affords an opportunity to learn about the model. Once the model is built, we do not learn about its properties by looking at it; we have to use and manipulate the model in order to elicit its secrets.

Depending on what kind of model we are dealing with, building and manipulating a model amount to different activities demanding different methodologies. Material models seem to be straightforward because they are used in common experimental contexts (e.g., we put the model of a car in the wind tunnel and measure its air resistance). Hence, as far as learning about the model is concerned, material models do not give rise to questions that go beyond questions concerning experimentation more generally.

Not so with fictional and abstract models. What constraints are there to the construction of fictional and abstract models, and how do we manipulate them? A natural response seems to be that we do this by performing a thought experiment. Different authors (e.g., Brown 1991; Gendler 2000; Norton 1991; Reiss 2003; Sorensen 1992) have explored this line of argument, but they have reached very different and often conflicting conclusions about how thought experiments are performed and what the status of their outcomes is (for details, see the entry on thought experiments).

An important class of models is computational in nature. For some mathematical models it is possible to derive results or solve the model's equations analytically. But quite often this is not the case. It is at this point that computers have a great impact, because they allow us to solve problems that are otherwise intractable. Hence, computational methods provide us with knowledge about (the consequences of) a model where analytical methods remain silent. Many parts of current research in both the natural and social sciences rely on computer simulations, which help scientists to explore the consequences of models that cannot be investigated otherwise. The formation and development of stars and galaxies, the dynamics of high-energy heavy-ion reactions, the evolution of life, outbreaks of wars, the progression of an economy, moral behavior, and the consequences of decision procedures in an organization are explored with computer simulations, to mention only a few examples.
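A minimal illustration of how computation extracts consequences that analytical methods do not deliver is, again, the pendulum. The linearized equation has a closed-form solution, but the full equation \(\ddot{\theta} = -(g/\ell)\sin\theta\) has no solution in elementary functions; a few lines of numerical integration nonetheless reveal, for instance, how the period grows with amplitude. The following sketch uses a simple semi-implicit Euler scheme and illustrative parameter values.

```python
# Minimal sketch: extracting a consequence of a model numerically.
# The full pendulum equation theta'' = -(g/l) sin(theta) has no
# elementary closed-form solution, but numerical integration shows
# how the period depends on amplitude. Parameters are illustrative.
import math

def pendulum_period(theta0, g=9.81, length=1.0, dt=1e-4, t_max=10.0):
    """Semi-implicit Euler integration; the period is the time between
    two successive downward crossings of theta = 0."""
    theta, omega, t = theta0, 0.0, 0.0
    crossings = []
    while t < t_max and len(crossings) < 2:
        omega -= (g / length) * math.sin(theta) * dt
        prev, theta = theta, theta + omega * dt
        t += dt
        if prev > 0 >= theta:                 # downward zero crossing
            crossings.append(t)
    return crossings[1] - crossings[0] if len(crossings) == 2 else None

for amplitude in (0.1, 1.0, 2.0):             # initial angle in radians
    print(f"amplitude {amplitude:.1f} rad -> period {pendulum_period(amplitude):.3f} s")
```

At low amplitude the sketch should approximately recover the small-angle prediction of about 2.006 s for these parameters, while at 2 radians the simulated period is markedly longer: knowledge about the model that the linearized, analytically solvable version cannot provide.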

Computer simulations are also heuristically important. They can suggest new theories, models, and hypotheses, for example, based on a systematic exploration of a model’s parameter space (Hartmann 1996). But computer simulations also bear methodological perils. For example, they may provide misleading results because, due to the discrete nature of the calculations carried out on a digital computer, they only allow for the exploration of a part of the full parameter space, and this subspace need not reflect every important feature of the model. The severity of this problem is somewhat mitigated by the increasing power of modern computers. But the availability of more computational power can also have adverse effects: it may encourage scientists to swiftly come up with increasingly complex but conceptually premature models, involving poorly understood assumptions or mechanisms and too many additional adjustable parameters (for a discussion of a related problem in the social sciences, see Braun and Saam 2015: Ch. 3). This can lead to an increase in empirical adequacy—which may be welcome for certain forecasting tasks—but not necessarily to a better understanding of the underlying mechanisms. As a result, the use of computer simulations can change the weight we assign to the various goals of science. Finally, the availability of computer power may seduce scientists into making calculations that do not have the degree of trustworthiness one would expect them to have. This happens, for instance, when computers are used to propagate probability distributions forward in time, which can turn out to be misleading (see Frigg et al. 2014). So it is important not to be carried away by the means that new powerful computers offer and lose sight of the actual goals of research. For a discussion of further issues in connection with computer simulations, we refer the reader to the entry on computer simulations in science .

Once we have knowledge about the model, this knowledge has to be "translated" into knowledge about the target system. It is at this point that the representational function of models becomes important again: if a model represents, then it can instruct us about reality because (at least some of) the model's parts or aspects have corresponding parts or aspects in the world. But if learning is connected to representation and if there are different kinds of representations (analogies, idealizations, etc.), then there are also different kinds of learning. If, for instance, we have a model we take to be a realistic depiction, the transfer of knowledge from the model to the target is accomplished in a different manner than when we deal with an analogue, or a model that involves idealizing assumptions. For a discussion of the different ways in which the representational function of models can be exploited to learn about the target, we refer the reader to the entry on scientific representation.

Some models explain. But how can they fulfill this function given that they typically involve idealizations? Do these models explain despite or because of the idealizations they involve? Does an explanatory use of models presuppose that they represent, or can non-representational models also explain? And what kind of explanation do models provide?

There is a long tradition of requiring that the explanans of a scientific explanation be true. We find this requirement in the deductive-nomological model (Hempel 1965) as well as in the more recent literature. For instance, Strevens (2008: 297) claims that "no causal account of explanation … allows nonveridical models to explain". For further discussions, see also Colombo et al. (2015).

Authors working in this tradition deny that idealizations make a positive contribution to explanation and explore how models can explain despite being idealized. McMullin (1968, 1985) argues that a causal explanation based on an idealized model leaves out only features which are irrelevant for the respective explanatory task (see also Salmon 1984 and Piccinini and Craver 2011 for a discussion of mechanism sketches). Friedman (1974) argues that a more realistic (and hence less idealized) model explains better on the unification account. The idea is that idealizations can (at least in principle) be de-idealized (for a critical discussion of this claim in the context of the debate about scientific explanations, see Batterman 2002; Bokulich 2011; Morrison 2005, 2009; Jebeile and Kennedy 2015; and Rice 2015). Strevens (2008) argues that an explanatory causal model has to provide an accurate representation of the relevant causal relationships or processes which the model shares with the target system. The idealized assumptions of a model do not make a difference for the phenomenon under consideration and are therefore explanatorily irrelevant. In contrast, both Potochnik (2017) and Rice (2015) argue that models that explain can directly distort many difference-making causes.

According to Woodward’s (2003) theory, models are tools to find out about the causal relations that hold between certain facts or processes, and it is these relations that do the explanatory work. More specifically, explanations provide information about patterns of counterfactual dependence between the explanans and the explanandum which

enable us to see what sort of difference it would have made for the explanandum if the factors cited in the explanans had been different in various possible ways. (Woodward 2003: 11)

Accounts of causal explanation have also led to various claims about how idealized models can provide explanations, exploring to what extent idealization allows for the misrepresentation of irrelevant causal factors by the explanatory model (Elgin and Sober 2002; Strevens 2004, 2008; Potochnik 2007; Weisberg 2007, 2013). However, having the causally relevant features in common with real systems continues to play the essential role in showing how idealized models can be explanatory.

But is it really the truth of the explanans that makes the model explanatory? Other authors pursue a more radical line and argue that false models explain not only despite their falsity, but in fact because of their falsity. Cartwright (1983: 44) maintains that “the truth doesn’t explain much”. In her so-called “simulacrum account of explanation”, she suggests that we explain a phenomenon by constructing a model that fits the phenomenon into the basic framework of a grand theory (1983: Ch. 8). On this account, the model itself is the explanation we seek. This squares well with basic scientific intuitions, but it leaves us with the question of what notion of explanation is at work (see also Elgin and Sober 2002) and of what explanatory function idealizations play in model explanations (Rice 2018, 2019). Wimsatt (2007: Ch. 6) stresses the role of false models as means to arrive at true theories. Batterman and Rice (2014) argue that models explain because the details that characterize specific systems do not matter for the explanation. Bokulich (2008, 2009, 2011, 2012) pursues a similar line of reasoning and sees the explanatory power of models as being closely related to their fictional nature. Bokulich (2009) and Kennedy (2012) present non-representational accounts of model explanation (see also Jebeile and Kennedy 2015). Reiss (2012) and Woody (2004) provide general discussions of the relationship between representation and explanation.

Many authors have pointed out that understanding is one of the central goals of science (see, for instance, de Regt 2017; Elgin 2017; Khalifa 2017; Potochnik 2017). In some cases, we want to understand a certain phenomenon (e.g., why the sky is blue); in other cases, we want to understand a specific scientific theory (e.g., quantum mechanics) that accounts for a phenomenon in question. Sometimes we gain understanding of a phenomenon by understanding the corresponding theory or model. For instance, Maxwell’s theory of electromagnetism helps us understand why the sky is blue. It is, however, controversial whether understanding a phenomenon always presupposes an understanding of the corresponding theory (de Regt 2009: 26).

Although there are many different ways of gaining understanding, models and the activity of scientific modeling are of particular importance here (de Regt et al. 2009; Morrison 2009; Potochnik 2017; Rice 2016). This insight can be traced back at least to Lord Kelvin who, in his famous 1884 Baltimore Lectures on Molecular Dynamics and the Wave Theory of Light, maintained that "the test of 'Do we or do we not understand a particular subject in physics?' is 'Can we make a mechanical model of it?'" (Kelvin 1884 [1987: 111]; see also Bailer-Jones 2009: Ch. 2; and de Regt 2017: Ch. 6).

But why do models play such a crucial role in the understanding of a subject matter? Elgin (2017) argues that this is not despite, but because of, models being literally false. She views false models as "felicitous falsehoods" that occupy center stage in the epistemology of science, and mentions the ideal-gas model in statistical mechanics and the Hardy–Weinberg model in genetics as examples of literally false models that are central to their respective disciplines. Understanding is holistic and it concerns a topic, a discipline, or a subject matter, rather than isolated claims or facts. Gaining understanding of a context means to have

an epistemic commitment to a comprehensive, systematically linked body of information that is grounded in fact, is duly responsive to reasons or evidence, and enables nontrivial inference, argument, and perhaps action regarding the topic the information pertains to (Elgin 2017: 44)

and models can play a crucial role in the pursuit of these epistemic commitments. For a discussion of Elgin’s account of models and understanding, see Baumberger and Brun (2017) and Frigg and Nguyen (forthcoming).

Elgin (2017), Lipton (2009), and Rice (2016) all argue that models can be used to understand independently of their ability to provide an explanation. Other authors, among them Strevens (2008, 2013), argue that understanding presupposes a scientific explanation and that

an individual has scientific understanding of a phenomenon just in case they grasp a correct scientific explanation of that phenomenon. (Strevens 2013: 510; see, however, Sullivan and Khalifa 2019)

On this account, understanding consists in a particular form of epistemic access an individual scientist has to an explanation. For Strevens this aspect is “grasping”, while for de Regt (2017) it is “intelligibility”. It is important to note that both Strevens and de Regt hold that such “subjective” aspects are a worthy topic for investigations in the philosophy of science. This contrasts with the traditional view (see, e.g., Hempel 1965) that delegates them to the realm of psychology. See Friedman (1974), Trout (2002), and Reutlinger et al. (2018) for further discussions of understanding.

Besides the functions already mentioned, it has been emphasized variously that models perform a number of other cognitive functions. Knuuttila (2005, 2011) argues that the epistemic value of models is not limited to their representational function, and develops an account that views models as epistemic artifacts which allow us to gather knowledge in diverse ways. Nersessian (1999, 2010) stresses the role of analogue models in concept-formation and other cognitive processes. Hartmann (1995) and Leplin (1980) discuss models as tools for theory construction and emphasize their heuristic and pedagogical value. Epstein (2008) lists a number of specific functions of models in the social sciences. Peschard (2011) investigates the way in which models may be used to construct other models and generate new target systems. And Isaac (2013) discusses non-explanatory uses of models which do not rely on their representational capacities.

4. Models and Theory

An important question concerns the relation between models and theories. There is a full spectrum of positions ranging from models being subordinate to theories to models being independent of theories.

To discuss the relation between models and theories in science it is helpful to briefly recapitulate the notions of a model and of a theory in logic. A theory is taken to be a (usually deductively closed) set of sentences in a formal language. A model is a structure (in the sense introduced in Section 2.3) that makes all sentences of a theory true when its symbols are interpreted as referring to objects, relations, or functions of a structure. The structure is a model of the theory in the sense that it is correctly described by the theory (see Bell and Machover 1977 or Hodges 1997 for details). Logical models are sometimes also referred to as "models of theory" to indicate that they are interpretations of an abstract formal system.
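The logical notion is easy to exhibit with a finite example (a toy illustration, not drawn from the cited texts). Take a "theory" consisting of two sentences, reflexivity and symmetry for a binary relation R, and a structure with a three-element domain; the structure is a model of the theory just in case every sentence comes out true under the interpretation:

```python
# Toy illustration of the logical notion of a model: a finite structure
# "makes the sentences of a theory true". The theory here consists of
# reflexivity and symmetry for a binary relation R; the interpretation
# below satisfies both, so the structure is a model of the theory.
from itertools import product

domain = {0, 1, 2}
R = {(0, 0), (1, 1), (2, 2), (0, 1), (1, 0)}   # interpretation of R

def reflexive():
    return all((x, x) in R for x in domain)

def symmetric():
    return all((y, x) in R
               for x, y in product(domain, repeat=2) if (x, y) in R)

theory = {"forall x. R(x,x)": reflexive,
          "forall x,y. R(x,y) -> R(y,x)": symmetric}

for sentence, holds in theory.items():
    print(sentence, "->", holds())
print("structure is a model of the theory:",
      all(check() for check in theory.values()))
```

Removing, say, the pair (1, 0) from R would falsify the symmetry axiom, and the same structure would no longer be a model of the theory.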

Models in science sometimes carry over from logic the idea of being the interpretation of an abstract calculus (Hesse 1967). This is salient in physics, where general laws—such as Newton's equation of motion—lie at the heart of a theory. These laws are applied to a particular system—e.g., a pendulum—by choosing a special force function, making assumptions about the mass distribution of the pendulum, and so on. The resulting model then is an interpretation (or realization) of the general law.

It is important to keep the notions of a logical and a representational model separate (Thomson-Jones 2006): these are distinct concepts. Something can be a logical model without being a representational model, and vice versa. This, however, does not mean that something cannot be a model in both senses at once. In fact, as Hesse (1967) points out, many models in science are both logical and representational models. Newton's model of planetary motion is a case in point: the model, consisting of two homogeneous perfect spheres located in otherwise empty space that attract each other gravitationally, is simultaneously a logical model (because it makes the axioms of Newtonian mechanics true when they are interpreted as referring to the model) and a representational model (because it represents the real sun and earth).

There are two main conceptions of scientific theories, the so-called syntactic view of theories and the so-called semantic view of theories (see the entry on the structure of scientific theories). On both conceptions models play a subsidiary role to theories, albeit in very different ways. The syntactic view of theories (see entry section on the syntactic view) retains the logical notions of a model and a theory. It construes a theory as a set of sentences in an axiomatized logical system, and a model as an alternative interpretation of a certain calculus (Braithwaite 1953; Campbell 1920 [1957]; Nagel 1961; Spector 1965). If, for instance, we take the mathematics used in the kinetic theory of gases and reinterpret the terms of this calculus in a way that makes them refer to billiard balls, the billiard balls are a model of the kinetic theory of gases in the sense that all sentences of the theory come out true. The model is meant to be something that we are familiar with, and it serves the purpose of making an abstract formal calculus more palpable. A given theory can have different models, and which model we choose depends both on our aims and our background knowledge. Proponents of the syntactic view disagree about the importance of models. Carnap and Hempel thought that models only serve a pedagogic or aesthetic purpose and are ultimately dispensable because all relevant information is contained in the theory (Carnap 1938; Hempel 1965; see also Bailer-Jones 1999). Nagel (1961) and Braithwaite (1953), on the other hand, emphasize the heuristic role of models, and Schaffner (1969) submits that theoretical terms get at least part of their meaning from models.

The semantic view of theories (see entry section on the semantic view) dispenses with sentences in an axiomatized logical system and construes a theory as a family of models. On this view, a theory literally is a class, cluster, or family of models—models are the building blocks of which scientific theories are made up. Different versions of the semantic view work with different notions of a model, but, as noted in Section 2.3, in the semantic view models are mostly construed as set-theoretic structures. For a discussion of the different options, we refer the reader to the relevant entry in this encyclopedia (linked at the beginning of this paragraph).

In both the syntactic and the semantic view of theories models are seen as subordinate to theory and as playing no role outside the context of a theory. This vision of models has been challenged in a number of ways, with authors pointing out that models enjoy various degrees of freedom from theory and function autonomously in many contexts. Independence can take many forms, and large parts of the literature on models are concerned with investigating various forms of independence.

Models as completely independent of theory. The most radical departure from a theory-centered analysis of models is the realization that there are models that are completely independent from any theory. An example of such a model is the Lotka–Volterra model. The model describes the interaction of two populations: a population of predators and one of prey animals (Weisberg 2013). The model was constructed using only relatively commonsensical assumptions about predators and prey and the mathematics of differential equations. There was no appeal to a theory of predator–prey interactions or a theory of population growth, and the model is independent of theories about its subject matter. If a model is constructed in a domain where no theory is available, then the model is sometimes referred to as a "substitute model" (Groenewold 1961), because the model substitutes for a theory.

Models as a means to explore theory. Models can also be used to explore theories (Morgan and Morrison 1999). An obvious way in which this can happen is when a model is a logical model of a theory (see Section 4.1). A logical model is a set of objects and properties that make a formal sentence true, and so one can see in the model how the axioms of the theory play out in a particular setting and what kinds of behavior they dictate. But not all models that are used to explore theories are logical models, and models can represent features of theories in other ways. As an example, consider chaos theory. The equations of non-linear systems, such as those describing the three-body problem, have solutions that are too complex to study with paper-and-pencil methods, and even computer simulations are limited in various ways. Abstract considerations about the qualitative behavior of solutions show that there is a mechanism that has been dubbed "stretching and folding" (see the entry on chaos). To obtain an idea of the complexity of the dynamics exhibiting stretching and folding, Smale proposed to study a simple model of the flow—now known as the "horseshoe map" (Tabor 1989)—which provides important insights into the nature of stretching and folding. Other examples of models of that kind are the Kac ring model that is used to study equilibrium properties of systems in statistical mechanics (Lavis 2008) and Norton's dome in Newtonian mechanics (Norton 2003).
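The Kac ring just mentioned is simple enough to sketch in a few lines (a minimal rendering of the standard setup, with illustrative parameters: N sites around a ring, each carrying a black or white ball; a fixed random set of marked edges; at each step every ball moves one site and flips color when it crosses a marked edge). Although the dynamics is exactly reversible and periodic, the color imbalance decays toward equilibrium, which is what makes the model a useful probe of the foundations of equilibrium statistical mechanics.

```python
# Minimal sketch of the Kac ring: N sites, each with a black (1) or
# white (0) ball; a fixed random set of marked edges. At each step every
# ball moves one site clockwise and flips color when crossing a marked
# edge. Parameters are illustrative.
import random

random.seed(0)
N = 1000
balls = [1] * N                                       # start all black
marked = [random.random() < 0.1 for _ in range(N)]    # mark ~10% of edges

def step(balls):
    # the ball arriving at site i comes from site i-1 and flips
    # if the edge between i-1 and i is marked
    return [balls[i - 1] ^ marked[i - 1] for i in range(N)]

for t in range(0, 51, 10):
    print(f"t={t:2d}  black-minus-white={2 * sum(balls) - N}")
    for _ in range(10):
        balls = step(balls)
```

For a marking fraction μ the imbalance should decay roughly like N(1 − 2μ)^t on average, even though running the map 2N times returns the ring exactly to its initial state.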

Models as complements of theories. A theory may be incompletely specified in the sense that it only imposes certain general constraints but remains silent about the details of concrete situations, which are provided by a model (Redhead 1980). A special case of this situation is when a qualitative theory is known and the model introduces quantitative measures (Apostel 1961). Redhead's example of a theory that is underdetermined in this way is axiomatic quantum field theory, which only imposes certain general constraints on quantum fields but does not provide an account of particular fields. Harré (2004) notes that models can complement theories by providing mechanisms for processes that are left unspecified in the theory even though they are responsible for bringing about the observed phenomena.

Theories may be too complicated to handle. In such cases a model can complement a theory by providing a simplified version of the theoretical scenario that allows for a solution. Quantum chromodynamics, for instance, cannot easily be used to investigate the physics of an atomic nucleus even though it is the relevant fundamental theory. To get around this difficulty, physicists construct tractable phenomenological models (such as the MIT bag model) which effectively describe the relevant degrees of freedom of the system under consideration (Hartmann 1999, 2001). The advantage of these models is that they yield results where theories remain silent. Their drawback is that it is often not clear how to understand the relationship between the model and the theory, as the two are, strictly speaking, contradictory.

Models as preliminary theories. The notion of a model as a substitute for a theory is closely related to the notion of a developmental model. This term was coined by Leplin (1980), who pointed out how useful models were in the development of early quantum theory, and it is now used as an umbrella notion covering cases in which models are some sort of a preliminary exercise to theory.

Also closely related is the notion of a probing model (or "study model"). Models of this kind do not perform a representational function and are not expected to instruct us about anything beyond the model itself. The purpose of these models is to test new theoretical tools that are used later on to build representational models. In field theory, for instance, the so-called φ⁴-model was studied extensively, not because it was believed to represent anything real, but because it served several heuristic functions: the simplicity of the φ⁴-model allowed physicists to "get a feeling" for what quantum field theories are like and to extract some general features that this simple model shared with more complicated ones. Physicists could study complicated techniques such as renormalization in a simple setting, and it was possible to get acquainted with important mechanisms—in this case symmetry-breaking—that could later be used in different contexts (Hartmann 1995). This is true not only for physics. As Wimsatt (1987, 2007) points out, a false model in genetics can perform many useful functions, among them the following: the false model can help answer questions about more realistic models, provide an arena for answering questions about properties of more complex models, "factor out" phenomena that would not otherwise be seen, serve as a limiting case of a more general model (or two false models may define the extremes of a continuum of cases on which the real case is supposed to lie), or lead to the identification of relevant variables and the estimation of their values.

Interpretative models. Cartwright (1983, 1999) argues that models do not only aid the application of theories that are somehow incomplete; she claims that models are also involved whenever a theory with an overarching mathematical structure is applied. The main theories in physics—classical mechanics, electrodynamics, quantum mechanics, and so on—fall into this category. Theories of that kind are formulated in terms of abstract concepts that need to be concretized for the theory to provide a description of the target system, and in concretizing the relevant concepts, idealized objects and processes are introduced. For instance, when applying classical mechanics, the abstract concept of force has to be replaced with a concrete force such as gravity. To obtain tractable equations, this procedure has to be applied to a simplified scenario, for instance that of two perfectly spherical and homogeneous planets in otherwise empty space, rather than to reality in its full complexity. The result is an interpretative model, which grounds the application of mathematical theories to real-world targets. Such models are independent from theory in that the theory does not determine their form, and yet they are necessary for the application of the theory to a concrete problem.

Models as mediators . The relation between models and theories can be complicated and disorderly. The contributors to a programmatic collection of essays edited by Morgan and Morrison (1999) rally around the idea that models are instruments that mediate between theories and the world. Models are “autonomous agents” in that they are independent from both theories and their target systems, and it is this independence that allows them to mediate between the two. Theories do not provide us with algorithms for the construction of a model; they are not “vending machines” into which one can insert a problem and a model pops out (Cartwright 1999). The construction of a model often requires detailed knowledge about materials, approximation schemes, and the setup, and these are not provided by the corresponding theory. Furthermore, the inner workings of a model are often driven by a number of different theories working cooperatively. In contemporary climate modeling, for instance, elements of different theories—among them fluid dynamics, thermodynamics, electromagnetism—are put to work cooperatively. What delivers the results is not the stringent application of one theory, but the voices of different theories when put to use in chorus with each other in one model.

In complex cases like the study of a laser system or the global climate, models and theories can get so entangled that it becomes unclear where a line between the two should be drawn: where does the model end and the theory begin? This is not only a problem for philosophical analysis; it also arises in scientific practice. Bailer-Jones (2002) interviewed a group of physicists about their understanding of models and their relation to theories, and reports widely diverging views: (i) there is no substantive difference between model and theory; (ii) models become theories when their degree of confirmation increases; (iii) models contain simplifications and omissions, while theories are accurate and complete; (iv) theories are more general than models, and modeling is about applying general theories to specific cases. The first suggestion seems to be too radical to do justice to many aspects of practice, where a distinction between models and theories is clearly made. The second view is in line with common parlance, where the terms “model” and “theory” are sometimes used to express someone’s attitude towards a particular hypothesis. The phrase “it’s just a model” indicates that the hypothesis at stake is asserted only tentatively or is even known to be false, while something is awarded the label “theory” if it has acquired some degree of general acceptance. However, this use of “model” is different from the uses we have seen in Sections 1 to 3 and is therefore of no use if we aim to understand the relation between scientific models and theories (and, incidentally, one can equally dismiss speculative claims as being “just a theory”). The third proposal is correct in associating models with idealizations and simplifications, but it overshoots by restricting this to models; in fact, also theories can contain idealizations and simplifications. The fourth view seems closely aligned with interpretative models and the idea that models are mediators, but being more general is a gradual notion and hence does not provide a clear-cut criterion to distinguish between theories and models.

5. Models and Other Debates in the Philosophy of Science

The debate over scientific models has important repercussions for other issues in the philosophy of science (for a historical account of the philosophical discussion about models, see Bailer-Jones 1999). Traditionally, the debates over, say, scientific realism, reductionism, and laws of nature were couched in terms of theories, because theories were seen as the main carriers of scientific knowledge. Once models are acknowledged as occupying an important place in the edifice of science, these issues have to be reconsidered with a focus on models. The question is whether, and if so how, discussions of these issues change when we shift focus from theories to models. Up to now, no comprehensive model-based account of any of these issues has emerged, but models have left important traces in the discussions of these topics.

As we have seen in Section 1, models typically provide a distorted representation of their targets. If one sees science as primarily model-based, this could be taken to suggest an antirealist interpretation of science. Realists, however, deny that the presence of idealizations in models renders a realist approach to science impossible and point out that a good model, while not literally true, is usually at least approximately true, and/or that it can be improved by de-idealization (Laymon 1985; McMullin 1985; Nowak 1979; Brzezinski and Nowak 1992).

Apart from the usual worries about the elusiveness of the notion of approximate truth (for a discussion, see the entry on truthlikeness), antirealists have taken issue with this reply for two (related) reasons. First, as Cartwright (1989) points out, there is no reason to assume that one can always improve a model by adding de-idealizing corrections. Second, de-idealization does not seem to accord with scientific practice, because scientists rarely invest work in repeatedly de-idealizing an existing model. Rather, they shift to a different modeling framework once the adjustments to be made get too involved (Hartmann 1998). The various models of the atomic nucleus are a case in point: once it was realized that shell effects are important for understanding various subatomic phenomena, the (collective) liquid-drop model was put aside and the (single-particle) shell model was developed to account for the corresponding findings. A further difficulty with de-idealization is that most idealizations are not “controlled”. For example, it is not clear in what way one could de-idealize the MIT bag model to eventually arrive at quantum chromodynamics, the supposedly correct underlying theory.

A further antirealist argument, the “incompatible-models argument”, takes as its starting point the observation that scientists often successfully use several incompatible models of one and the same target system for predictive purposes (Morrison 2000). These models seemingly contradict each other, as they ascribe different properties to the same target system. In nuclear physics, for instance, the liquid-drop model explores the analogy of the atomic nucleus with a (charged) fluid drop, while the shell model describes nuclear properties in terms of the properties of protons and neutrons, the constituents of an atomic nucleus. This practice appears to cause a problem for scientific realism: Realists typically hold that there is a close connection between the predictive success of a theory and its being at least approximately true. But if several models of the same system are predictively successful and if these models are mutually inconsistent, then it is difficult to maintain that they are all approximately true.
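The two nuclear models show how concretely the ascribed properties diverge. The liquid-drop model issues in the semi-empirical mass formula for the binding energy, given here in its common textbook form with empirically fitted coefficients:

\[
B(Z,A) \;=\; a_V A \;-\; a_S A^{2/3} \;-\; a_C\,\frac{Z(Z-1)}{A^{1/3}} \;-\; a_A\,\frac{(A-2Z)^2}{A} \;+\; \delta(Z,A),
\]

whose volume, surface, and Coulomb terms treat the nucleus as a drop of incompressible charged fluid. The shell model, by contrast, derives binding energies and the magic numbers from protons and neutrons occupying discrete single-particle levels in a potential well. Both deliver successful predictions in their respective domains, yet the structures they ascribe to one and the same nucleus are incompatible.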

Realists can react to this argument in various ways. First, they can challenge the claim that the models in question are indeed predictively successful. If the models are not good predictors, then the argument is blocked. Second, they can defend a version of “perspectival realism” (Giere 2006; Massimi 2017; Rueger 2005). Proponents of this position (which is sometimes also called “perspectivism”) situate it somewhere between “standard” scientific realism and antirealism, and where exactly the right middle position lies is the subject matter of active debate (Massimi 2018a,b; Saatsi 2016; Teller 2018; and the contributions to Massimi and McCoy 2019). Third, realists can deny that there is a problem in the first place, because scientific models, which are always idealized and therefore strictly speaking false, are just the wrong vehicle to make a point about realism (which should be discussed in terms of theories).

A particular focal point of the realism debate is laws of nature, where two questions arise: what are laws, and are they truthfully reflected in our scientific representations? According to the two currently dominant accounts, the best-systems approach and the necessitarian approach, laws of nature are understood to be universal in scope, meaning that they apply to everything that there is in the world (for discussion of laws, see the entry on laws of nature). This take on laws does not seem to sit well with a view that places models at the center of scientific research. What role do general laws play in science if it is models that represent what is happening in the world? And how are models and laws related?

One possible response to these questions is to argue that laws of nature govern entities and processes in a model rather than in the world. Fundamental laws, on this approach, do not state facts about the world but hold true of entities and processes in the model. This view has been advocated in different variants: Cartwright (1983) argues that all laws are ceteris paribus laws. Cartwright (1999) makes use of “capacities” (which she considers to be prior to laws) and introduces the notion of a “nomological machine”. This is

a fixed (enough) arrangement of components, or factors, with stable (enough) capacities that in the right sort of stable (enough) environment will, with repeated operation, give rise to the kind of regular behavior that we represent in our scientific laws. (1999: 50; see also the entry on ceteris paribus laws)

Giere (1999) argues that the laws of a theory are better thought of, not as encoding general truths about the world, but rather as open-ended statements that can be filled in various ways in the process of building more specific scientific models. Similar positions have also been defended by Teller (2001) and van Fraassen (1989).

The multiple-models problem mentioned in Section 5.1 also raises the question of how different models are related. Evidently, multiple models for the same target system do not generally stand in a deductive relationship, as they often contradict each other. Some (Cartwright 1999; Hacking 1983) have suggested a picture of science according to which there are no systematic relations that hold between different models. Some models are tied together because they represent the same target system, but this does not imply that they enter into any further relationships (deductive or otherwise). We are confronted with a patchwork of models, all of which hold ceteris paribus in their specific domains of applicability.

Some argue that this picture is at least partially incorrect because there are various interesting relations that hold between different models or theories. These relations range from thoroughgoing reductive relations (Scheibe 1997, 1999, 2001: esp. Chs. V.23 and V.24) and controlled approximations, through singular limit relations (Batterman 2001 [2016]), to structural relations (Gähde 1997) and rather loose relations called “stories” (Hartmann 1999; see also Bokulich 2003; Teller 2002; and the essays collected in Part III of Hartmann et al. 2008). These suggestions have been made on the basis of case studies, and it remains to be seen whether a more general account of these relations can be given and whether a deeper justification for them can be provided, for instance, within a Bayesian framework (first steps towards a Bayesian understanding of reductive relations can be found in Dizadji-Bahmani et al. 2011; Liefke and Hartmann 2018; and Tešić 2019).

Models also figure in the debate about reduction and emergence in physics. Here, some authors argue that the modern approach to renormalization challenges Nagel’s (1961) model of reduction and the broader doctrine of reductionism (for a critical discussion, see, for instance, Batterman 2002, 2010, 2011; Morrison 2012; and Saatsi and Reutlinger 2018). Dizadji-Bahmani et al. (2010) provide a defense of the Nagel–Schaffner model of reduction, and Butterfield (2011a,b, 2014) argues that renormalization is consistent with Nagelian reduction. Palacios (2019) shows that phase transitions are compatible with reductionism, and Hartmann (2001) argues that the effective-field-theories research program is consistent with reductionism (see also Bain 2013 and Franklin forthcoming). Rosaler (2015) argues for a “local” form of reduction that sees the fundamental reductive relation as holding between models rather than theories, a view that is nevertheless compatible with the Nagel–Schaffner model of reduction. See also the entries on intertheory relations in physics and scientific reduction.

In the social sciences, agent-based models (ABMs) are increasingly used (Klein et al. 2018). These models show how surprisingly complex behavioral patterns at the macro-scale can emerge from a small number of simple behavioral rules for the individual agents and their interactions. This raises questions similar to the questions mentioned above about reduction and emergence in physics, but so far one only finds scattered remarks about reduction in the literature. See Weisberg and Muldoon (2009) and Zollman (2007) for the application of ABMs to the epistemology and the social structure of science, and Colyvan (2013) for a discussion of methodological questions raised by normative models in general.
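To illustrate how macro-patterns emerge from micro-rules, here is a minimal sketch of a Schelling-style segregation model in Python; all parameter names and values are illustrative and not drawn from any of the works cited. Agents follow one simple rule (relocate if too few neighbors are of your own type), yet segregated clusters form at the macro-scale.

```python
import random

# Minimal Schelling-style agent-based model (illustrative sketch).
# Two types of agents live on a toroidal grid; an agent is unhappy if
# fewer than THRESHOLD of its neighbors share its type, and unhappy
# agents relocate to randomly chosen empty cells.

SIZE, EMPTY, THRESHOLD, ROUNDS = 20, 80, 0.4, 60

def neighbors(grid, i, j):
    """Return the non-empty contents of the 8 surrounding cells (torus)."""
    cells = [grid[(i + di) % SIZE][(j + dj) % SIZE]
             for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
    return [c for c in cells if c is not None]

def unhappy(grid, i, j):
    nbrs = neighbors(grid, i, j)
    same = sum(1 for c in nbrs if c == grid[i][j])
    return bool(nbrs) and same / len(nbrs) < THRESHOLD

# Populate the grid with equal numbers of "A" and "B" agents plus empties.
cells = [None] * EMPTY + ["A", "B"] * ((SIZE * SIZE - EMPTY) // 2)
random.shuffle(cells)
grid = [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

for _ in range(ROUNDS):
    movers = [(i, j) for i in range(SIZE) for j in range(SIZE)
              if grid[i][j] is not None and unhappy(grid, i, j)]
    empties = [(i, j) for i in range(SIZE) for j in range(SIZE)
               if grid[i][j] is None]
    for i, j in movers:
        ni, nj = empties.pop(random.randrange(len(empties)))
        grid[ni][nj], grid[i][j] = grid[i][j], None
        empties.append((i, j))

content = sum(1 for i in range(SIZE) for j in range(SIZE)
              if grid[i][j] is not None and not unhappy(grid, i, j))
print(f"content agents after {ROUNDS} rounds: {content} of {SIZE * SIZE - EMPTY}")
```

Running the sketch typically shows the share of content agents rising as like-typed clusters form, a macro-pattern that no individual rule mentions.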

Bibliography

  • Achinstein, Peter, 1968, Concepts of Science: A Philosophical Analysis, Baltimore, MD: Johns Hopkins Press.
  • Akerlof, George A., 1970, “The Market for ‘Lemons’: Quality Uncertainty and the Market Mechanism”, The Quarterly Journal of Economics , 84(3): 488–500. doi:10.2307/1879431
  • Apostel, Leo, 1961, “Towards the Formal Study of Models in the Non-Formal Sciences”, in Freudenthal 1961: 1–37. doi:10.1007/978-94-010-3667-2_1
  • Bailer-Jones, Daniela M., 1999, “Tracing the Development of Models in the Philosophy of Science”, in Magnani, Nersessian, and Thagard 1999: 23–40. doi:10.1007/978-1-4615-4813-3_2
  • –––, 2002, “Scientists’ Thoughts on Scientific Models”, Perspectives on Science , 10(3): 275–301. doi:10.1162/106361402321899069
  • –––, 2009, Scientific Models in Philosophy of Science , Pittsburgh, PA: University of Pittsburgh Press.
  • Bailer-Jones, Daniela M. and Coryn A. L. Bailer-Jones, 2002, “Modeling Data: Analogies in Neural Networks, Simulated Annealing and Genetic Algorithms”, in Magnani and Nersessian 2002: 147–165. doi:10.1007/978-1-4615-0605-8_9
  • Bain, Jonathan, 2013, “Emergence in Effective Field Theories”, European Journal for Philosophy of Science , 3(3): 257–273. doi:10.1007/s13194-013-0067-0
  • Bandyopadhyay, Prasanta S. and Malcolm R. Forster (eds.), 2011, Philosophy of Statistics (Handbook of the Philosophy of Science 7), Amsterdam: Elsevier.
  • Barberousse, Anouk and Pascal Ludwig, 2009, “Fictions and Models”, in Suárez 2009: 56–75.
  • Bartha, Paul, 2010, By Parallel Reasoning: The Construction and Evaluation of Analogical Arguments , New York: Oxford University Press. doi:10.1093/acprof:oso/9780195325539.001.0001
  • –––, 2013 [2019], “Analogy and Analogical Reasoning”, in Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Spring 2019 Edition). URL = < https://plato.stanford.edu/archives/spr2019/entries/reasoning-analogy/ >
  • Batterman, Robert W., 2002, The Devil in the Details: Asymptotic Reasoning in Explanation, Reduction, and Emergence , Oxford: Oxford University Press. doi:10.1093/0195146476.001.0001
  • –––, 2001 [2016], “Intertheory Relations in Physics”, in Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Fall 2016 Edition). URL = < https://plato.stanford.edu/archives/fall2016/entries/physics-interrelate >
  • –––, 2010, “Reduction and Renormalization”, in Gerhard Ernst and Andreas Hüttemann (eds.), Time, Chance and Reduction: Philosophical Aspects of Statistical Mechanics , Cambridge: Cambridge University Press, pp. 159–179.
  • –––, 2011, “Emergence, Singularities, and Symmetry Breaking”, Foundations of Physics , 41(6): 1031–1050. doi:10.1007/s10701-010-9493-4
  • Batterman, Robert W. and Collin C. Rice, 2014, “Minimal Model Explanations”, Philosophy of Science , 81(3): 349–376. doi:10.1086/676677
  • Baumberger, Christoph and Georg Brun, 2017, “Dimensions of Objectual Understanding”, in Stephen R. Grimm, Christoph Baumberger, and Sabine Ammon (eds.), Explaining Understanding: New Perspectives from Epistemology and Philosophy of Science , New York: Routledge, pp. 165–189.
  • Bell, John and Moshé Machover, 1977, A Course in Mathematical Logic , Amsterdam: North-Holland.
  • Berry, Michael, 2002, “Singular Limits”, Physics Today , 55(5): 10–11. doi:10.1063/1.1485555
  • Black, Max, 1962, Models and Metaphors: Studies in Language and Philosophy , Ithaca, NY: Cornell University Press.
  • Bogen, James and James Woodward, 1988, “Saving the Phenomena”, The Philosophical Review , 97(3): 303–352. doi:10.2307/2185445
  • Bokulich, Alisa, 2003, “Horizontal Models: From Bakers to Cats”, Philosophy of Science , 70(3): 609–627. doi:10.1086/376927
  • –––, 2008, Reexamining the Quantum–Classical Relation: Beyond Reductionism and Pluralism , Cambridge: Cambridge University Press. doi:10.1017/CBO9780511751813
  • –––, 2009, “Explanatory Fictions”, in Suárez 2009: 91–109.
  • –––, 2011, “How Scientific Models Can Explain”, Synthese , 180(1): 33–45. doi:10.1007/s11229-009-9565-1
  • –––, 2012, “Distinguishing Explanatory from Nonexplanatory Fictions”, Philosophy of Science , 79(5): 725–737. doi:10.1086/667991
  • Braithwaite, Richard, 1953, Scientific Explanation , Cambridge: Cambridge University Press.
  • Braun, Norman and Nicole J. Saam (eds.), 2015, Handbuch Modellbildung und Simulation in den Sozialwissenschaften , Wiesbaden: Springer Fachmedien. doi:10.1007/978-3-658-01164-2
  • Brewer, William F. and Clark A. Chinn, 1994, “Scientists’ Responses to Anomalous Data: Evidence from Psychology, History, and Philosophy of Science”, in PSA 1994: Proceedings of the 1994 Biennial Meeting of the Philosophy of Science Association , Vol. 1, pp. 304–313. doi:10.1086/psaprocbienmeetp.1994.1.193035
  • Brown, James, 1991, The Laboratory of the Mind: Thought Experiments in the Natural Sciences , London: Routledge.
  • Brzezinski, Jerzy and Leszek Nowak (eds.), 1992, Idealization III: Approximation and Truth , Amsterdam: Rodopi.
  • Butterfield, Jeremy, 2011a, “Emergence, Reduction and Supervenience: A Varied Landscape”, Foundations of Physics , 41(6): 920–959. doi:10.1007/s10701-011-9549-0
  • –––, 2011b, “Less Is Different: Emergence and Reduction Reconciled”, Foundations of Physics , 41(6): 1065–1135. doi:10.1007/s10701-010-9516-1
  • –––, 2014, “Reduction, Emergence, and Renormalization”, Journal of Philosophy , 111(1): 5–49. doi:10.5840/jphil201411111
  • Callender, Craig and Jonathan Cohen, 2006, “There Is No Special Problem about Scientific Representation”, Theoria , 55(1): 67–85.
  • Campbell, Norman, 1920 [1957], Physics: The Elements , Cambridge: Cambridge University Press. Reprinted as Foundations of Science , New York: Dover, 1957.
  • Carnap, Rudolf, 1938, “Foundations of Logic and Mathematics”, in Otto Neurath, Charles Morris, and Rudolf Carnap (eds.), International Encyclopaedia of Unified Science , Volume 1, Chicago, IL: University of Chicago Press, pp. 139–213.
  • Cartwright, Nancy, 1983, How the Laws of Physics Lie , Oxford: Oxford University Press. doi:10.1093/0198247044.001.0001
  • –––, 1989, Nature’s Capacities and Their Measurement , Oxford: Oxford University Press. doi:10.1093/0198235070.001.0001
  • –––, 1999, The Dappled World: A Study of the Boundaries of Science , Cambridge: Cambridge University Press. doi:10.1017/CBO9781139167093
  • Colombo, Matteo, Stephan Hartmann, and Robert van Iersel, 2015, “Models, Mechanisms, and Coherence”, The British Journal for the Philosophy of Science , 66(1): 181–212. doi:10.1093/bjps/axt043
  • Colyvan, Mark, 2013, “Idealisations in Normative Models”, Synthese , 190(8): 1337–1350. doi:10.1007/s11229-012-0166-z
  • Contessa, Gabriele, 2010, “Scientific Models and Fictional Objects”, Synthese , 172(2): 215–229. doi:10.1007/s11229-009-9503-2
  • Crowther, Karen, Niels S. Linnemann, and Christian Wüthrich, forthcoming, “What We Cannot Learn from Analogue Experiments”, Synthese , first online: 4 May 2019. doi:10.1007/s11229-019-02190-0
  • Da Costa, Newton and Steven French, 2000, “Models, Theories, and Structures: Thirty Years On”, Philosophy of Science , 67(supplement): S116–S127. doi:10.1086/392813
  • Dardashti, Radin, Stephan Hartmann, Karim Thébault, and Eric Winsberg, 2019, “Hawking Radiation and Analogue Experiments: A Bayesian Analysis”, Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics , 67: 1–11. doi:10.1016/j.shpsb.2019.04.004
  • Dardashti, Radin, Karim P. Y. Thébault, and Eric Winsberg, 2017, “Confirmation via Analogue Simulation: What Dumb Holes Could Tell Us about Gravity”, The British Journal for the Philosophy of Science , 68(1): 55–89. doi:10.1093/bjps/axv010
  • de Regt, Henk, 2009, “Understanding and Scientific Explanation”, in de Regt, Leonelli, and Eigner 2009: 21–42.
  • –––, 2017, Understanding Scientific Understanding , Oxford: Oxford University Press. doi:10.1093/oso/9780190652913.001.0001
  • de Regt, Henk, Sabina Leonelli, and Kai Eigner (eds.), 2009, Scientific Understanding: Philosophical Perspectives , Pittsburgh, PA: University of Pittsburgh Press.
  • Dizadji-Bahmani, Foad, Roman Frigg, and Stephan Hartmann, 2010, “Who’s Afraid of Nagelian Reduction?”, Erkenntnis , 73(3): 393–412. doi:10.1007/s10670-010-9239-x
  • –––, 2011, “Confirmation and Reduction: A Bayesian Account”, Synthese , 179(2): 321–338. doi:10.1007/s11229-010-9775-6
  • Downes, Stephen M., 1992, “The Importance of Models in Theorizing: A Deflationary Semantic View”, PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association , 1992(1): 142–153. doi:10.1086/psaprocbienmeetp.1992.1.192750
  • Elgin, Catherine Z., 2010, “Telling Instances”, in Roman Frigg and Matthew Hunter (eds.), Beyond Mimesis and Convention (Boston Studies in the Philosophy of Science 262), Dordrecht: Springer Netherlands, pp. 1–17. doi:10.1007/978-90-481-3851-7_1
  • –––, 2017, True Enough . Cambridge, MA, and London: MIT Press.
  • Elgin, Mehmet and Elliott Sober, 2002, “Cartwright on Explanation and Idealization”, Erkenntnis , 57(3): 441–450. doi:10.1023/A:1021502932490
  • Epstein, Joshua M., 2008, “Why Model?”, Journal of Artificial Societies and Social Simulation , 11(4): 12. [ Epstein 2008 available online ]
  • Fisher, Grant, 2006, “The Autonomy of Models and Explanation: Anomalous Molecular Rearrangements in Early Twentieth-Century Physical Organic Chemistry”, Studies in History and Philosophy of Science Part A , 37(4): 562–584. doi:10.1016/j.shpsa.2006.09.009
  • Franklin, Alexander, forthcoming, “Whence the Effectiveness of Effective Field Theories?”, The British Journal for the Philosophy of Science , first online: 3 August 2018. doi:10.1093/bjps/axy050
  • Freudenthal, Hans (ed.), 1961, The Concept and the Role of the Model in Mathematics and Natural and Social Sciences , Dordrecht: Reidel. doi:10.1007/978-94-010-3667-2
  • Friedman, Michael, 1974, “Explanation and Scientific Understanding”, Journal of Philosophy , 71(1): 5–19. doi:10.2307/2024924
  • Frigg, Roman, 2010a, “Fiction in Science”, in John Woods (ed.), Fictions and Models: New Essays , Munich: Philosophia Verlag, pp. 247–287.
  • –––, 2010b, “Models and Fiction”, Synthese , 172(2): 251–268. doi:10.1007/s11229-009-9505-0
  • Frigg, Roman, Seamus Bradley, Hailiang Du, and Leonard A. Smith, 2014, “Laplace’s Demon and the Adventures of His Apprentices”, Philosophy of Science , 81(1): 31–59. doi:10.1086/674416
  • Frigg, Roman and James Nguyen, 2016, “The Fiction View of Models Reloaded”, The Monist , 99(3): 225–242. doi:10.1093/monist/onw002 [ Frigg and Nguyen 2016 available online ]
  • –––, forthcoming, “Mirrors without Warnings”, Synthese , first online: 21 May 2019. doi:10.1007/s11229-019-02222-9
  • Fumagalli, Roberto, 2016, “Why We Cannot Learn from Minimal Models”, Erkenntnis , 81(3): 433–455. doi:10.1007/s10670-015-9749-7
  • Gähde, Ulrich, 1997, “Anomalies and the Revision of Theory-Elements: Notes on the Advance of Mercury’s Perihelion”, in Maria Luisa Dalla Chiara, Kees Doets, Daniele Mundici, and Johan van Benthem (eds.), Structures and Norms in Science (Synthese Library 260), Dordrecht: Springer Netherlands, pp. 89–104. doi:10.1007/978-94-017-0538-7_6
  • Galison, Peter, 1997, Image and Logic: A Material Culture of Microphysics , Chicago, IL: University of Chicago Press.
  • Gelfert, Axel, 2016, How to Do Science with Models: A Philosophical Primer (Springer Briefs in Philosophy), Cham: Springer International Publishing. doi:10.1007/978-3-319-27954-1
  • Gendler, Tamar Szabó, 2000, Thought Experiment: On the Powers and Limits of Imaginary Cases , New York and London: Garland.
  • Gibbard, Allan and Hal R. Varian, 1978, “Economic Models”, The Journal of Philosophy , 75(11): 664–677. doi:10.5840/jphil1978751111
  • Giere, Ronald N., 1988, Explaining Science: A Cognitive Approach , Chicago, IL: University of Chicago Press.
  • –––, 1999, Science Without Laws , Chicago, IL: University of Chicago Press.
  • –––, 2006, Scientific Perspectivism , Chicago, IL: University of Chicago Press.
  • –––, 2009, “Why Scientific Models Should Not be Regarded as Works of Fiction”, in Suárez 2009: 248–258.
  • –––, 2010, “An Agent-Based Conception of Models and Scientific Representation”, Synthese , 172(2): 269–281. doi:10.1007/s11229-009-9506-z
  • Godfrey-Smith, Peter, 2007, “The Strategy of Model-Based Science”, Biology & Philosophy , 21(5): 725–740. doi:10.1007/s10539-006-9054-6
  • –––, 2009, “Abstractions, Idealizations, and Evolutionary Biology”, in Anouk Barberousse, Michel Morange, and Thomas Pradeu (eds.), Mapping the Future of Biology: Evolving Concepts and Theories (Boston Studies in the Philosophy of Science 266), Dordrecht: Springer Netherlands, pp. 47–56. doi:10.1007/978-1-4020-9636-5_4
  • Groenewold, H. J., 1961, “The Model in Physics”, in Freudenthal 1961: 98–103. doi:10.1007/978-94-010-3667-2_9
  • Grüne-Yanoff, Till, 2009, “Learning from Minimal Economic Models”, Erkenntnis , 70(1): 81–99. doi:10.1007/s10670-008-9138-6
  • Hacking, Ian, 1983, Representing and Intervening: Introductory Topics in the Philosophy of Natural Science , Cambridge: Cambridge University Press. doi:10.1017/CBO9780511814563
  • Hale, Susan C., 1988, “Spacetime and the Abstract/Concrete Distinction”, Philosophical Studies , 53(1): 85–102. doi:10.1007/BF00355677
  • Harré, Rom, 2004, Modeling: Gateway to the Unknown (Studies in Multidisciplinarity 1), ed. by Daniel Rothbart, Amsterdam etc.: Elsevier.
  • Harris, Todd, 2003, “Data Models and the Acquisition and Manipulation of Data”, Philosophy of Science , 70(5): 1508–1517. doi:10.1086/377426
  • Hartmann, Stephan, 1995, “Models as a Tool for Theory Construction: Some Strategies of Preliminary Physics”, in Herfel et al. 1995: 49–67.
  • –––, 1996, “The World as a Process: Simulations in the Natural and Social Sciences”, in Hegselmann, Mueller, and Troitzsch 1996: 77–100. doi:10.1007/978-94-015-8686-3_5
  • –––, 1998, “Idealization in Quantum Field Theory”, in Shanks 1998: 99–122.
  • –––, 1999, “Models and Stories in Hadron Physics”, in Morgan and Morrison 1999: 326–346. doi:10.1017/CBO9780511660108.012
  • –––, 2001, “Effective Field Theories, Reductionism and Scientific Explanation”, Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics , 32(2): 267–304. doi:10.1016/S1355-2198(01)00005-3
  • Hartmann, Stephan, Carl Hoefer, and Luc Bovens (eds.), 2008, Nancy Cartwright’s Philosophy of Science (Routledge Studies in the Philosophy of Science), New York: Routledge.
  • Hegselmann, Rainer, Ulrich Mueller, and Klaus G. Troitzsch (eds.), 1996, Modelling and Simulation in the Social Sciences from the Philosophy of Science Point of View (Theory and Decision Library 23), Dordrecht: Springer Netherlands. doi:10.1007/978-94-015-8686-3
  • Helman, David H. (ed.), 1988, Analogical Reasoning: Perspectives of Artificial Intelligence, Cognitive Science, and Philosophy (Synthese Library 197), Dordrecht: Springer Netherlands. doi:10.1007/978-94-015-7811-0
  • Hempel, Carl G., 1965, Aspects of Scientific Explanation and Other Essays in the Philosophy of Science , New York: Free Press.
  • Herfel, William, Wladiysław Krajewski, Ilkka Niiniluoto, and Ryszard Wojcicki (eds.), 1995, Theories and Models in Scientific Process (Poznań Studies in the Philosophy of Science and the Humanities 44), Amsterdam: Rodopi.
  • Hesse, Mary, 1963, Models and Analogies in Science , London: Sheed and Ward.
  • –––, 1967, “Models and Analogy in Science”, in Paul Edwards (ed.), Encyclopedia of Philosophy , New York: Macmillan, pp. 354–359.
  • –––, 1974, The Structure of Scientific Inference , London: Macmillan.
  • Hodges, Wilfrid, 1997, A Shorter Model Theory , Cambridge: Cambridge University Press.
  • Holyoak, Keith and Paul Thagard, 1995, Mental Leaps: Analogy in Creative Thought , Cambridge, MA: MIT Press.
  • Horowitz, Tamara and Gerald J. Massey (eds.), 1991, Thought Experiments in Science and Philosophy , Lanham, MD: Rowman & Littlefield.
  • Isaac, Alistair M. C., 2013, “Modeling without Representation”, Synthese , 190(16): 3611–3623. doi:10.1007/s11229-012-0213-9
  • Jebeile, Julie and Ashley Graham Kennedy, 2015, “Explaining with Models: The Role of Idealizations”, International Studies in the Philosophy of Science , 29(4): 383–392. doi:10.1080/02698595.2015.1195143
  • Jones, Martin R., 2005, “Idealization and Abstraction: A Framework”, in Jones and Cartwright 2005: 173–217. doi:10.1163/9789401202732_010
  • Jones, Martin R. and Nancy Cartwright (eds.), 2005, Idealization XII: Correcting the Model (Poznań Studies in the Philosophy of the Sciences and the Humanities 86), Amsterdam and New York: Rodopi. doi:10.1163/9789401202732
  • Kelvin, William Thomson, Baron, 1884 [1987], Notes of lectures on molecular dynamics and the wave theory of light. Delivered at the Johns Hopkins University, Baltimore (aka Lord Kelvin’s Baltimore Lectures), A. S. Hathaway (recorder). A revised version was published in 1904, London: C.J. Clay and Sons. Reprint of the 1884 version in Robert Kargon and Peter Achinstein (eds.), Kelvin’s Baltimore Lectures and Modern Theoretical Physics , Cambridge, MA: MIT Press, 1987.
  • Khalifa, Kareem, 2017, Understanding, Explanation, and Scientific Knowledge , Cambridge: Cambridge University Press. doi:10.1017/9781108164276
  • Klein, Dominik, Johannes Marx, and Kai Fischbach, 2018, “Agent-Based Modeling in Social Science History and Philosophy: An Introduction”, Historical Social Research , 43(1): 243–258.
  • Knuuttila, Tarja, 2005, “Models, Representation, and Mediation”, Philosophy of Science , 72(5): 1260–1271. doi:10.1086/508124
  • –––, 2011, “Modelling and Representing: An Artefactual Approach to Model-Based Representation”, Studies in History and Philosophy of Science Part A , 42(2): 262–271. doi:10.1016/j.shpsa.2010.11.034
  • Kroes, Peter, 1989, “Structural Analogies Between Physical Systems”, The British Journal for the Philosophy of Science , 40(2): 145–154. doi:10.1093/bjps/40.2.145
  • Lange, Marc, 2015, “On ‘Minimal Model Explanations’: A Reply to Batterman and Rice”, Philosophy of Science , 82(2): 292–305. doi:10.1086/680488
  • Lavis, David A., 2008, “Boltzmann, Gibbs, and the Concept of Equilibrium”, Philosophy of Science , 75(5): 682–692. doi:10.1086/594514
  • Laymon, Ronald, 1982, “Scientific Realism and the Hierarchical Counterfactual Path from Data to Theory”, PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association , 1982(1): 107–121. doi:10.1086/psaprocbienmeetp.1982.1.192660
  • –––, 1985, “Idealizations and the Testing of Theories by Experimentation”, in Peter Achinstein and Owen Hannaway (eds.), Observation, Experiment, and Hypothesis in Modern Physical Science , Cambridge, MA: MIT Press, pp. 147–173.
  • –––, 1991, “Thought Experiments by Stevin, Mach and Gouy: Thought Experiments as Ideal Limits and Semantic Domains”, in Horowitz and Massey 1991: 167–191.
  • Leonelli, Sabina, 2010, “Packaging Small Facts for Re-Use: Databases in Model Organism Biology”, in Peter Howlett and Mary S. Morgan (eds.), How Well Do Facts Travel? The Dissemination of Reliable Knowledge , Cambridge: Cambridge University Press, pp. 325–348. doi:10.1017/CBO9780511762154.017
  • –––, 2016, Data-Centric Biology: A Philosophical Study , Chicago, IL, and London: University of Chicago Press.
  • –––, 2019, “What Distinguishes Data from Models?”, European Journal for Philosophy of Science , 9(2): article 22. doi:10.1007/s13194-018-0246-0
  • Leonelli, Sabina and Rachel A. Ankeny, 2012, “Re-Thinking Organisms: The Impact of Databases on Model Organism Biology”, Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences , 43(1): 29–36. doi:10.1016/j.shpsc.2011.10.003
  • Leplin, Jarrett, 1980, “The Role of Models in Theory Construction”, in Thomas Nickles (ed.), Scientific Discovery, Logic, and Rationality (Boston Studies in the Philosophy of Science 56), Dordrecht: Springer Netherlands, 267–283. doi:10.1007/978-94-009-8986-3_12
  • Levy, Arnon, 2012, “Models, Fictions, and Realism: Two Packages”, Philosophy of Science , 79(5): 738–748. doi:10.1086/667992
  • –––, 2015, “Modeling without Models”, Philosophical Studies , 172(3): 781–798. doi:10.1007/s11098-014-0333-9
  • Levy, Arnon and Adrian Currie, 2015, “Model Organisms Are Not (Theoretical) Models”, The British Journal for the Philosophy of Science , 66(2): 327–348. doi:10.1093/bjps/axt055
  • Levy, Arnon and Peter Godfrey-Smith (eds.), 2020, The Scientific Imagination: Philosophical and Psychological Perspectives , New York: Oxford University Press.
  • Liefke, Kristina and Stephan Hartmann, 2018, “Intertheoretic Reduction, Confirmation, and Montague’s Syntax–Semantics Relation”, Journal of Logic, Language and Information , 27(4): 313–341. doi:10.1007/s10849-018-9272-8
  • Lipton, Peter, 2009, “Understanding without Explanation”, in de Regt, Leonelli, and Eigner 2009: 43–63.
  • Luczak, Joshua, 2017, “Talk about Toy Models”, Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics , 57: 1–7. doi:10.1016/j.shpsb.2016.11.002
  • Magnani, Lorenzo, 2012, “Scientific Models Are Not Fictions: Model-Based Science as Epistemic Warfare”, in Lorenzo Magnani and Ping Li (eds.), Philosophy and Cognitive Science: Western & Eastern Studies (Studies in Applied Philosophy, Epistemology and Rational Ethics 2), Berlin and Heidelberg: Springer, pp. 1–38. doi:10.1007/978-3-642-29928-5_1
  • Magnani, Lorenzo and Claudia Casadio (eds.), 2016, Model-Based Reasoning in Science and Technology: Logical, Epistemological, and Cognitive Issues (Studies in Applied Philosophy, Epistemology and Rational Ethics 27), Cham: Springer International Publishing. doi:10.1007/978-3-319-38983-7
  • Magnani, Lorenzo and Nancy J. Nersessian (eds.), 2002, Model-Based Reasoning: Science, Technology, Values , Boston, MA: Springer US. doi:10.1007/978-1-4615-0605-8
  • Magnani, Lorenzo, Nancy J. Nersessian, and Paul Thagard (eds.), 1999, Model-Based Reasoning in Scientific Discovery , Boston, MA: Springer US. doi:10.1007/978-1-4615-4813-3
  • Mäki, Uskali, 1994, “Isolation, Idealization and Truth in Economics”, in Bert Hamminga and Neil B. De Marchi (eds.), Idealization VI: Idealization in Economics (Poznań Studies in the Philosophy of the Sciences and the Humanities 38), Amsterdam: Rodopi, pp. 147–168.
  • Massimi, Michela, 2017, “Perspectivism”, in Juha Saatsi (ed.), The Routledge Handbook of Scientific Realism , London: Routledge, pp. 164–175.
  • –––, 2018a, “Four Kinds of Perspectival Truth”, Philosophy and Phenomenological Research , 96(2): 342–359. doi:10.1111/phpr.12300
  • –––, 2018b, “Perspectival Modeling”, Philosophy of Science , 85(3): 335–359. doi:10.1086/697745
  • –––, 2019, “Two Kinds of Exploratory Models”, Philosophy of Science , 86(5): 869–881. doi:10.1086/705494
  • Massimi, Michela and Casey D. McCoy (eds.), 2019, Understanding Perspectivism: Scientific Challenges and Methodological Prospects , New York: Routledge. doi:10.4324/9781315145198
  • Mayo, Deborah, 1996, Error and the Growth of Experimental Knowledge , Chicago, IL: University of Chicago Press.
  • –––, 2018, Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars , Cambridge: Cambridge University Press. doi:10.1017/9781107286184
  • McMullin, Ernan, 1968, “What Do Physical Models Tell Us?”, in B. Van Rootselaar and J. Frits Staal (eds.), Logic, Methodology and Philosophy of Science III (Studies in Logic and the Foundations of Mathematics 52), Amsterdam: North Holland, pp. 385–396. doi:10.1016/S0049-237X(08)71206-0
  • –––, 1985, “Galilean Idealization”, Studies in History and Philosophy of Science Part A , 16(3): 247–273. doi:10.1016/0039-3681(85)90003-2
  • Morgan, Mary S., 1999, “Learning from Models”, in Morgan and Morrison 1999: 347–388. doi:10.1017/CBO9780511660108.013
  • Morgan, Mary S. and Marcel J. Boumans, 2004, “Secrets Hidden by Two-Dimensionality: The Economy as a Hydraulic Machine”, in Soraya de Chadarevian and Nick Hopwood (eds.), Model: The Third Dimension of Science , Stanford, CA: Stanford University Press, pp. 369–401.
  • Morgan, Mary S. and Margaret Morrison (eds.), 1999, Models as Mediators: Perspectives on Natural and Social Science , Cambridge: Cambridge University Press. doi:10.1017/CBO9780511660108
  • Morrison, Margaret, 1999, “Models as Autonomous Agents”, in Morgan and Morrison 1999: 38–65. doi:10.1017/CBO9780511660108.004
  • –––, 2000, Unifying Scientific Theories: Physical Concepts and Mathematical Structures , Cambridge: Cambridge University Press. doi:10.1017/CBO9780511527333
  • –––, 2005, “Approximating the Real: The Role of Idealizations in Physical Theory”, in Jones and Cartwright 2005: 145–172. doi:10.1163/9789401202732_009
  • –––, 2009, “Understanding in Physics and Biology: From the Abstract to the Concrete”, in de Regt, Leonelli, and Eigner 2009: 123–145.
  • –––, 2012, “Emergent Physics and Micro-Ontology”, Philosophy of Science , 79(1): 141–166. doi:10.1086/663240
  • Musgrave, Alan, 1981, “‘Unreal Assumptions’ in Economic Theory: The F-Twist Untwisted”, Kyklos , 34(3): 377–387. doi:10.1111/j.1467-6435.1981.tb01195.x
  • Nagel, Ernest, 1961, The Structure of Science: Problems in the Logic of Scientific Explanation , New York: Harcourt, Brace and World.
  • Nersessian, Nancy J., 1999, “Model-Based Reasoning in Conceptual Change”, in Magnani, Nersessian, and Thagard 1999: 5–22. doi:10.1007/978-1-4615-4813-3_1
  • –––, 2010, Creating Scientific Concepts , Cambridge, MA: MIT Press.
  • Nguyen, James, forthcoming, “It’s Not a Game: Accurate Representation with Toy Models”, The British Journal for the Philosophy of Science , first online: 23 March 2019. doi:10.1093/bjps/axz010
  • Nguyen, James and Roman Frigg, forthcoming, “Mathematics Is Not the Only Language in the Book of Nature”, Synthese , first online: 28 August 2017. doi:10.1007/s11229-017-1526-5
  • Norton, John D., 1991, “Thought Experiments in Einstein’s Work”, in Horowitz and Massey 1991: 129–148.
  • –––, 2003, “Causation as Folk Science”, Philosopher’s Imprint , 3: article 4. [ Norton 2003 available online ]
  • –––, 2012, “Approximation and Idealization: Why the Difference Matters”, Philosophy of Science , 79(2): 207–232. doi:10.1086/664746
  • Nowak, Leszek, 1979, The Structure of Idealization: Towards a Systematic Interpretation of the Marxian Idea of Science , Dordrecht: D. Reidel.
  • Palacios, Patricia, 2019, “Phase Transitions: A Challenge for Intertheoretic Reduction?”, Philosophy of Science , 86(4): 612–640. doi:10.1086/704974
  • Peschard, Isabelle, 2011, “Making Sense of Modeling: Beyond Representation”, European Journal for Philosophy of Science , 1(3): 335–352. doi:10.1007/s13194-011-0032-8
  • Piccinini, Gualtiero and Carl Craver, 2011, “Integrating Psychology and Neuroscience: Functional Analyses as Mechanism Sketches”, Synthese , 183(3): 283–311. doi:10.1007/s11229-011-9898-4
  • Pincock, Christopher, 2012, Mathematics and Scientific Representation , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199757107.001.0001
  • –––, forthcoming, “Concrete Scale Models, Essential Idealization and Causal Explanation”, British Journal for the Philosophy of Science .
  • Portides, Demetris P., 2007, “The Relation between Idealisation and Approximation in Scientific Model Construction”, Science & Education , 16(7–8): 699–724. doi:10.1007/s11191-006-9001-6
  • –––, 2014, “How Scientific Models Differ from Works of Fiction”, in Lorenzo Magnani (ed.), Model-Based Reasoning in Science and Technology (Studies in Applied Philosophy, Epistemology and Rational Ethics 8), Berlin and Heidelberg: Springer, pp. 75–87. doi:10.1007/978-3-642-37428-9_5
  • Potochnik, Angela, 2007, “Optimality Modeling and Explanatory Generality”, Philosophy of Science , 74(5): 680–691.
  • –––, 2017, Idealization and the Aims of Science , Chicago, IL: University of Chicago Press.
  • Poznic, Michael, 2016, “Make-Believe and Model-Based Representation in Science: The Epistemology of Frigg’s and Toon’s Fictionalist Views of Modeling”, Teorema: Revista Internacional de Filosofía , 35(3): 201–218.
  • Psillos, Stathis, 1995, “The Cognitive Interplay between Theories and Models: The Case of 19th Century Optics”, in Herfel et al. 1995: 105–133.
  • Redhead, Michael, 1980, “Models in Physics”, The British Journal for the Philosophy of Science , 31(2): 145–163. doi:10.1093/bjps/31.2.145
  • Reiss, Julian, 2003, “Causal Inference in the Abstract or Seven Myths about Thought Experiments”, in Causality: Metaphysics and Methods Research Project , Technical Report 03/02. London: London School of Economics.
  • –––, 2006, “Social Capacities”, in Hartmann et al. 2006: 265–288.
  • –––, 2012, “The Explanation Paradox”, Journal of Economic Methodology , 19(1): 43–62. doi:10.1080/1350178X.2012.661069
  • Reutlinger, Alexander, 2017, “Do Renormalization Group Explanations Conform to the Commonality Strategy?”, Journal for General Philosophy of Science , 48(1): 143–150. doi:10.1007/s10838-016-9339-7
  • Reutlinger, Alexander, Dominik Hangleiter, and Stephan Hartmann, 2018, “Understanding (with) Toy Models”, The British Journal for the Philosophy of Science , 69(4): 1069–1099. doi:10.1093/bjps/axx005
  • Rice, Collin C., 2015, “Moving Beyond Causes: Optimality Models and Scientific Explanation”, Noûs , 49(3): 589–615. doi:10.1111/nous.12042
  • –––, 2016, “Factive Scientific Understanding without Accurate Representation”, Biology & Philosophy , 31(1): 81–102. doi:10.1007/s10539-015-9510-2
  • –––, 2018, “Idealized Models, Holistic Distortions, and Universality”, Synthese , 195(6): 2795–2819. doi:10.1007/s11229-017-1357-4
  • –––, 2019, “Models Don’t Decompose That Way: A Holistic View of Idealized Models”, The British Journal for the Philosophy of Science , 70(1): 179–208. doi:10.1093/bjps/axx045
  • Rosaler, Joshua, 2015, “Local Reduction in Physics”, Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics , 50: 54–69. doi:10.1016/j.shpsb.2015.02.004
  • Rueger, Alexander, 2005, “Perspectival Models and Theory Unification”, The British Journal for the Philosophy of Science , 56(3): 579–594. doi:10.1093/bjps/axi128
  • Rueger, Alexander and David Sharp, 1998, “Idealization and Stability: A Perspective from Nonlinear Dynamics”, in Shanks 1998: 201–216.
  • Saatsi, Juha, 2016, “Models, Idealisations, and Realism”, in Emiliano Ippoliti, Fabio Sterpetti, and Thomas Nickles (eds.), Models and Inferences in Science (Studies in Applied Philosophy, Epistemology and Rational Ethics 25), Cham: Springer International Publishing, pp. 173–189. doi:10.1007/978-3-319-28163-6_10
  • Saatsi, Juha and Alexander Reutlinger, 2018, “Taking Reductionism to the Limit: How to Rebut the Antireductionist Argument from Infinite Limits”, Philosophy of Science , 85(3): 455–482. doi:10.1086/697735
  • Salis, Fiora, forthcoming, “The New Fiction View of Models”, The British Journal for the Philosophy of Science , first online: 20 April 2019. doi:10.1093/bjps/axz015
  • Salmon, Wesley C., 1984, Scientific Explanation and the Causal Structure of the World , Princeton, NJ: Princeton University Press.
  • Schaffner, Kenneth F., 1969, “The Watson–Crick Model and Reductionism”, The British Journal for the Philosophy of Science , 20(4): 325–348. doi:10.1093/bjps/20.4.325
  • Scheibe, Erhard, 1997, Die Reduktion physikalischer Theorien: Ein Beitrag zur Einheit der Physik, Teil I: Grundlagen und elementare Theorie , Berlin: Springer.
  • –––, 1999, Die Reduktion physikalischer Theorien: Ein Beitrag zur Einheit der Physik, Teil II: Inkommensurabilität und Grenzfallreduktion , Berlin: Springer.
  • –––, 2001, Between Rationalism and Empiricism: Selected Papers in the Philosophy of Physics , Brigitte Falkenburg (ed.), New York: Springer. doi:10.1007/978-1-4613-0183-7
  • Shanks, Niall (ed.), 1998, Idealization in Contemporary Physics , Amsterdam: Rodopi.
  • Shech, Elay, 2018, “Idealizations, Essential Self-Adjointness, and Minimal Model Explanation in the Aharonov–Bohm Effect”, Synthese , 195(11): 4839–4863. doi:10.1007/s11229-017-1428-6
  • Sismondo, Sergio and Snait Gissis (eds.), 1999, Modeling and Simulation , Special Issue of Science in Context , 12(2).
  • Sorensen, Roy A., 1992, Thought Experiments , New York: Oxford University Press. doi:10.1093/019512913X.001.0001
  • Spector, Marshall, 1965, “Models and Theories”, The British Journal for the Philosophy of Science , 16(62): 121–142. doi:10.1093/bjps/XVI.62.121
  • Staley, Kent W., 2004, The Evidence for the Top Quark: Objectivity and Bias in Collaborative Experimentation , Cambridge: Cambridge University Press.
  • Sterrett, Susan G., 2006, “Models of Machines and Models of Phenomena”, International Studies in the Philosophy of Science , 20(1): 69–80. doi:10.1080/02698590600641024
  • –––, forthcoming, “Scale Modeling”, in Diane Michelfelder and Neelke Doorn (eds.), Routledge Handbook of Philosophy of Engineering , Chapter 32. [ Sterrett forthcoming available online ]
  • Strevens, Michael, 2004, “The Causal and Unification Approaches to Explanation Unified—Causally”, Noûs , 38(1): 154–176. doi:10.1111/j.1468-0068.2004.00466.x
  • –––, 2008, Depth: An Account of Scientific Explanation , Cambridge, MA, and London: Harvard University Press.
  • –––, 2013, Tychomancy: Inferring Probability from Causal Structure , Cambridge, MA, and London: Harvard University Press.
  • Suárez, Mauricio, 2003, “Scientific Representation: Against Similarity and Isomorphism”, International Studies in the Philosophy of Science , 17(3): 225–244. doi:10.1080/0269859032000169442
  • –––, 2004, “An Inferential Conception of Scientific Representation”, Philosophy of Science , 71(5): 767–779. doi:10.1086/421415
  • ––– (ed.), 2009, Fictions in Science: Philosophical Essays on Modeling and Idealization , London: Routledge. doi:10.4324/9780203890103
  • Sugden, Robert, 2000, “Credible Worlds: The Status of Theoretical Models in Economics”, Journal of Economic Methodology , 7(1): 1–31. doi:10.1080/135017800362220
  • Sullivan, Emily and Kareem Khalifa, 2019, “Idealizations and Understanding: Much Ado About Nothing?”, Australasian Journal of Philosophy , 97(4): 673–689. doi:10.1080/00048402.2018.1564337
  • Suppe, Frederick, 2000, “Theory Identity”, in William H. Newton-Smith (ed.), A Companion to the Philosophy of Science , Oxford: Wiley-Blackwell, pp. 525–527.
  • Suppes, Patrick, 1960, “A Comparison of the Meaning and Uses of Models in Mathematics and the Empirical Sciences”, Synthese , 12(2–3): 287–301. Reprinted in Freudenthal 1961: 163–177, and in Suppes 1969: 10–23. doi:10.1007/BF00485107 doi:10.1007/978-94-010-3667-2_16
  • –––, 1962, “Models of Data”, in Ernest Nagel, Patrick Suppes, and Alfred Tarski (eds.), Logic, Methodology and Philosophy of Science: Proceedings of the 1960 International Congress , Stanford, CA: Stanford University Press, pp. 252–261. Reprinted in Suppes 1969: 24–35.
  • –––, 1969, Studies in the Methodology and Foundations of Science: Selected Papers from 1951 to 1969 , Dordrecht: Reidel.
  • –––, 2007, “Statistical Concepts in Philosophy of Science”, Synthese , 154(3): 485–496. doi:10.1007/s11229-006-9122-0
  • Swoyer, Chris, 1991, “Structural Representation and Surrogative Reasoning”, Synthese , 87(3): 449–508. doi:10.1007/BF00499820
  • Tabor, Michael, 1989, Chaos and Integrability in Nonlinear Dynamics: An Introduction , New York: John Wiley.
  • Teller, Paul, 2001, “Twilight of the Perfect Model”, Erkenntnis , 55(3): 393–415. doi:10.1023/A:1013349314515
  • –––, 2002, “Critical Study: Nancy Cartwright’s The Dappled World: A Study of the Boundaries of Science ”, Noûs , 36(4): 699–725. doi:10.1111/1468-0068.t01-1-00408
  • –––, 2009, “Fictions, Fictionalization, and Truth in Science”, in Suárez 2009: 235–247.
  • –––, 2018, “Referential and Perspectival Realism”, Spontaneous Generations: A Journal for the History and Philosophy of Science , 9(1): 151–164. doi:10.4245/sponge.v9i1.26990
  • Tešić, Marko, 2019, “Confirmation and the Generalized Nagel–Schaffner Model of Reduction: A Bayesian Analysis”, Synthese , 196(3): 1097–1129. doi:10.1007/s11229-017-1501-1
  • Thomasson, Amie L., 1999, Fiction and Metaphysics , New York: Cambridge University Press. doi:10.1017/CBO9780511527463
  • –––, 2020, “If Models Were Fictions, Then What Would They Be?”, in Levy and Godfrey-Smith 2020: 51–74.
  • Thomson-Jones, Martin, 2006, “Models and the Semantic View”, Philosophy of Science , 73(5): 524–535. doi:10.1086/518322
  • –––, 2020, “Realism about Missing Systems”, in Levy and Godfrey-Smith 2020: 75–101.
  • Toon, Adam, 2012, Models as Make-Believe: Imagination, Fiction and Scientific Representation , Basingstoke: Palgrave Macmillan.
  • Trout, J. D., 2002, “Scientific Explanation and the Sense of Understanding”, Philosophy of Science , 69(2): 212–233. doi:10.1086/341050
  • van Fraassen, Bas C., 1989, Laws and Symmetry , Oxford: Oxford University Press. doi:10.1093/0198248601.001.0001
  • Walton, Kendall L., 1990, Mimesis as Make-Believe: On the Foundations of the Representational Arts , Cambridge, MA: Harvard University Press.
  • Weisberg, Michael, 2007, “Three Kinds of Idealization”, Journal of Philosophy , 104(12): 639–659. doi:10.5840/jphil20071041240
  • –––, 2013, Simulation and Similarity: Using Models to Understand the World , Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199933662.001.0001
  • Weisberg, Michael and Ryan Muldoon, 2009, “Epistemic Landscapes and the Division of Cognitive Labor”, Philosophy of Science , 76(2): 225–252. doi:10.1086/644786
  • Wimsatt, William, 1987, “False Models as Means to Truer Theories”, in Matthew Nitecki and Antoni Hoffman (eds.), Neutral Models in Biology , Oxford: Oxford University Press, pp. 23–55.
  • –––, 2007, Re-Engineering Philosophy for Limited Beings: Piecewise Approximations to Reality , Cambridge, MA: Harvard University Press.
  • Woodward, James, 2003, Making Things Happen: A Theory of Causal Explanation , Oxford: Oxford University Press. doi:10.1093/0195155270.001.0001
  • Woody, Andrea I., 2004, “More Telltale Signs: What Attention to Representation Reveals about Scientific Explanation”, Philosophy of Science , 71(5): 780–793. doi:10.1086/421416
  • Zollman, Kevin J. S., 2007, “The Communication Structure of Epistemic Communities”, Philosophy of Science , 74(5): 574–587. doi:10.1086/525605
Other Internet Resources

  • Internet Encyclopedia of Philosophy article on models
  • Bibliography (1450–2008), Mueller Science
  • Interactive models from various sciences (Phet, University of Colorado, Boulder)
  • Models of the global climate (Climate.gov)
  • Double-helix model of DNA (Proteopedia)
  • A Biologist’s Guide to Mathematical Modeling in Ecology and Evolution (Sarah Otto and Troy Day)
  • Lotka–Volterra model (analyticphysics.com)
  • Schelling’s Model of Segregation (Frank McCown)
  • Modeling Commons (NetLogo)
  • Social and Economic Networks: Models and Analysis (Stanford Online course)
  • Neural Network Models (TensorFlow)

Related Entries

analogy and analogical reasoning | laws of nature | science: unity of | scientific explanation | scientific realism | scientific representation | scientific theories: structure of | simulations in science | thought experiments

Acknowledgments

We would like to thank Joe Dewhurst, James Nguyen, Alexander Reutlinger, Collin Rice, Dunja Šešelja, and Paul Teller for helpful comments on the drafts of the revised version in 2019. When writing the original version back in 2006 we benefitted from comments and suggestions by Nancy Cartwright, Paul Humphreys, Julian Reiss, Elliott Sober, Chris Swoyer, and Paul Teller.

Copyright © 2020 by Roman Frigg <r.p.frigg@lse.ac.uk> and Stephan Hartmann <stephan.hartmann@lrz.uni-muenchen.de>


Design of Experiments

Introductory Basics: Introduction to Design of Experiments

The Open Educator Textbook Explanation with Examples and Video Demonstrations

Video Demonstration Only, Click on the Topic Below

What is Design of Experiments (DOE)?

Hypothesis Testing Basics

Explanation of Factor, Response, Dependent and Independent Variables

Levels of a Factor

Fixed Factor, Random Factor, and Block

Descriptive Statistics and Inferential Statistics

What is Analysis of Variance (ANOVA) & Why

p-value & Level of Significance

Errors in Statistical Tests: Type I, Type II, Type III

Hypothesis Testing

How to Choose an Appropriate Statistical Method/Test for Your Design of Experiments or Data Analysis

Single Sample Z Test Application, Data Collection, Analysis, Results Explained in MS Excel & Minitab

Single Sample T Test Application, Data Collection, Analysis, Results Explained in MS Excel & Minitab

Single Proportion Test Application, Data Collection, Analysis, Results Explained MS Excel & Minitab

Two-Sample Z Test Application, Data Collection, Analysis, Results Explained Using MS Excel & Minitab

Two Sample T Test Application, Data Collection, Analysis, Results Explained Using MS Excel & Minitab

Paired T Test Application, Data Collection, Analysis, Results Explained Using MS Excel & Minitab

Two Sample/Population Proportion Test Application, Analysis & Result Explained in MS Excel & Minitab
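As a code-based complement to the spreadsheet and Minitab demonstrations above, the same kind of two-sample t test takes a few lines in Python with SciPy; the data below are invented for illustration.

```python
from scipy import stats

# Illustrative data: cycle times (minutes) measured on two production lines.
line_a = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2, 11.7]
line_b = [12.6, 12.9, 12.4, 12.8, 12.7, 13.0, 12.5, 12.6]

# Welch's two-sample t test (does not assume equal variances).
t_stat, p_value = stats.ttest_ind(line_a, line_b, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
# Reject H0 (equal means) at the 0.05 level of significance if p_value < 0.05.
```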

Completely Randomized Design (CRD)

One-Way/Single-Factor Analysis of Variance (ANOVA)

One Way Single Factor Analysis of Variance ANOVA Completely Randomized Design Analysis in MS Excel

One Way Single Factor Analysis of Variance ANOVA Completely Randomized Design Analysis in Minitab

One Way Single Factor Analysis of Variance ANOVA Post Hoc Pairwise Comparison Analysis in MS Excel
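A minimal Python counterpart to the one-way ANOVA demonstrations above; the data are invented for illustration, and SciPy's f_oneway computes the same F statistic that the Excel and Minitab analyses report.

```python
from scipy import stats

# Illustrative CRD data: yields under three treatments, five replicates each.
t1 = [20.1, 21.3, 19.8, 20.7, 21.0]
t2 = [22.4, 23.1, 22.8, 23.5, 22.0]
t3 = [19.2, 18.7, 19.9, 18.5, 19.4]

f_stat, p_value = stats.f_oneway(t1, t2, t3)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value indicates at least one treatment mean differs; follow
# up with post hoc pairwise comparisons such as Tukey's HSD.
```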

Fixed vs Random Effect Model Explained with Examples Using Excel and Minitab

Randomized Complete Block Design

Randomized Complete Block Design of Experiments RCBD Using Minitab 2020

Latin Square Design Using Minitab Updated 2020

Graeco Latin Square Design Updated 2020

Latin Square and Graeco Latin Square Design

Latin Square and Graeco Latin Square Design Analysis using Minitab

Screening the Important Factors/Variables

Factorial Design of Experiments

Introduction to Factorial Design and the Main Effect Calculation

Calculate the Two-Factor Interaction Effect
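For reference, in a 2² design with factors A and B, n replicates, and treatment-combination totals (1), a, b, and ab in the usual Yates notation, the main and interaction effects are estimated as

\[
A = \frac{ab + a - b - (1)}{2n}, \qquad
B = \frac{ab + b - a - (1)}{2n}, \qquad
AB = \frac{ab + (1) - a - b}{2n}.
\]

Each numerator is the contrast of the corresponding effect; the interaction contrast compares the effect of A at the high level of B with the effect of A at the low level of B.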

Regression using the Calculated Effects

Basic Response Surface Methodology RSM Factorial Design

Construct ANOVA Table from the Effect Estimates

2k Factorial Design of Experiments

The Open Educator Textbook Explanation with Examples and Video Demonstrations for All Topics

Introduction to 2K Factorial Design

Contrast, Effect, Sum of Square, Estimate Formula, ANOVA table
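The general relationships behind that table, in the standard notation for a 2^k factorial with n replicates, are

\[
\text{Effect} = \frac{\text{Contrast}}{n\,2^{k-1}}, \qquad
SS = \frac{(\text{Contrast})^2}{n\,2^{k}},
\]

and since each such sum of squares carries a single degree of freedom, it enters the ANOVA table directly as its own mean square.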

Design Layout and Construction of 2K Factorial Design Using MS Excel

Write Treatment Combinations Systematically and Flawlessly

Contrast, Effect, Estimate, and Sum of Square Calculation Using MS Excel

Comparisons between MS Excel, Minitab, SPSS, and SAS in Design and Analysis of Experiments

Blocking and Confounding in 2k Design

Introduction to Blocking and Confounding

Confounding in Factorial and Fractional Factorial

Blocking and Confounding Using -1/+1 Coding System

Blocking and Confounding Using Linear Combination Method

Multiple Blocking and Confounding, How To

Complete vs Partial Confounding and The Appropriate Use of Them

How Many Confounded Treatments are There in a Multiple Confounded Effects

How to Confound Three or More Effects in Eight or More Blocks

Fractional Factorial Design

What is Fractional Factorial Design of Experiments

The One-Half Fraction Explained in 2K Fractional Factorial Design

Introduction to the Primary Basics of the Fractional Factorial Design

Design Resolution Explained

One-Half Fractional Factorial 2k Design Details Explained

How to Design a One-Half Fractional Factorial 2k Design using MS Excel

One-Quarter Fractional Factorial 2k Design

Design a One-Quarter Fractional Factorial 2k Design Using MS Excel

Calculate and Write All Effects in 2k Factorial Design Systematic Flawless

Write Alias Structure in 2K Fractional Factorial Design

Write Alias Structure in 2K Six Factor Quarter Fraction Factorial Design

Design a One-Eighth Fractional Factorial 2k Design Using MS Excel

2K Alias Structure Solution an Example Solution

Fractional Factorial Data Analysis Example Minitab (Fractional Factorial DOE Data Analysis Example Document)

Design any Fractional Factorial Design with the Lowest Number of Possible Runs Easiest Method in MS Excel

The Easiest Way to Randomize an Experiment Using MS Excel

Plackett-Burman Fractional Factorial Design Using MS Excel

Plackett Burman Fractional Factorial Design of Experiments DOE Using Minitab

Optimize the Important Factors/Variables

Applied Regression Analysis

Simple Linear Regression Analysis Using MS Excel and Minitab

Simple Linear Regression Analysis Real Life Example 1

Simple Linear Regression Analysis Real Life Example 2

Simple Linear Regression Analysis Example Cost Estimation

Linear Regression Diagnostics Analysis

Response Surface Methodology

What is Response Surface Methodology RSM and How to Learn it?

Basic Response Surface Methodology RSM Design and Analysis Minitab

Response Surface Basic Central Composite Design

Response Surface Central Composite Design in MS Excel

Response Surface Design Layout Construction Minitab MS Excel

Response Surface Design Analysis Example Minitab

Multiple Response Optimization in Response Surface Methodology RSM

Box Behnken Response Surface Methodology RSM Design and Analysis Explained Example using Minitab

Is Box Behnken Better than the Central Composite Design in the Response Surface Methodology?

Advanced Complex Mixed Factors

Expected Mean Square: Basics to Complex Models

Expected Mean Square All Fixed Factors

Expected Mean Square Random Effect Model

Restricted vs Unrestricted Mixed Model Design of Experiments with Fixed and Random Factors

How to Systematically Develop Expected Mean Square Fixed and Random Mixed Effect Model

How to Systematically Develop Expected Mean Square Random, Nested, and Fixed Mixed Effect Model

Restricted vs Unrestricted Mixed Models, How to Choose the Appropriate Model

Nested, & Repeated Measure, Split-Plot Design

Nested Design

Repeated Measure Design

Split Plot Design

Difference between Nested, Split Plot and Repeated Measure Design

Minitab Analysis Nested, Split Plot, and Repeated Measure Design

Analysis & Results Explained for Advanced DOE Partly Nested, Split-Plot, Mixed Fixed Random Models

Approximate F test | Pseudo F Test for Advanced Mixed Models nested, split plot, repeated measure

Taguchi Robust Parameter Design

Files Used in the Videos

Data Used in the Video for Robust Parameter Taguchi Design

How to Construct Taguchi Orthogonal Arrays Bose Design Generator

How to Construct Taguchi Orthogonal Arrays Plackett-Burman Design Generator

Taguchi Linear Graphs Possible Interactions

Taguchi Interaction Table Development How to

Video Demonstrations

Robust parameter Taguchi Design Terms Explained

Introduction To Robust Parameter Taguchi Design of Experiments Analysis Steps Explained

Robust Parameter Taguchi Design Signal to Noise Ratio Calculation in MS Excel

Robust Parameter Taguchi Design Example in MS Excel

Robust Parameter Taguchi Design Example in Minitab

How to Construct Taguchi Orthogonal Array L8(2^7) in MS Excel

How to Construct Taguchi Orthogonal Array L9(3^4) in MS Excel

How to Construct Taguchi Orthogonal Array L16(4^5) in MS Excel (MS Excel file for the Design)

How to Construct Taguchi Orthogonal Array L16(2^15) in MS Excel

How to Construct Taguchi Orthogonal Array L32(2^31) in MS Excel

Construct Any (Taguchi) Orthogonal Arrays up to L36(2^35) in MS Excel

Taguchi Linear Graphs Explained and How to Use Them

Taguchi Triangular Interactions Table Explained and How to Use them in the Design of Experiments

Taguchi Interaction Table Construction Design of Experiments How to

Taguchi Linear Graphs, Interactions Table, Design Resolution, Alias Structure, & Fractional Factorial Design of Experiments

How to Create Robust Parameter Taguchi Design in Minitab

How to perform Robust Parameter Taguchi Static Analysis in Minitab

How to perform Robust Parameter Taguchi Dynamic Analysis in Minitab

How to perform Robust Parameter Taguchi Dynamic Analysis in MS Excel

Robust Parameter Taguchi Dynamic Analysis Regress Method in MS Excel and Minitab

Recommended Texts

General Design of Experiments

[Books are ordered by how heavily they are used]

Hinkelmann, K., & Kempthorne, O. (2007). Design and Analysis of Experiments, Introduction to Experimental Design (Volume 1) . John Wiley & Sons. ISBN-13: 978-0471727569; ISBN-10: 0471727563.

Hinkelmann, K., & Kempthorne, O. (2005). Design and Analysis of Experiments, Advanced Experimental Design (Volume 2) . John Wiley & Sons. ISBN-13: 978-0471551775; ISBN-10: 0471551775.

Montgomery, D. C. (2012). Design and Analysis of Experiments (8th ed.). John Wiley & Sons. ISBN-13: 978-1118146927; ISBN-10: 1118146921.

Box, G. E. P., Hunter, J. S., & Hunter, W. G. (2005). Statistics for Experimenters: Design, Innovation, and Discovery (2nd ed.). Wiley-Interscience.

Kempthorne, O. (1952). The Design and Analysis of Experiments. John Wiley & Sons.

Fisher, R. A., & Bennett, J. H. (1990). Statistical Methods, Experimental Design, and Scientific Inference. Oxford University Press. ISBN-10: 0198522290; ISBN-13: 978-0198522294.

Regression & Response Surface

Kutner, M. H., Nachtsheim, C. J., Neter, J., & Li, W. (2013). Applied Linear Statistical Models.

Myers, R. H., Montgomery, D. C., & Anderson-Cook, C. M. (2019). Response Surface Methodology: Process and Product Optimization Using Designed Experiments. Hoboken: Wiley.

Robust Parameter Optimization

Taguchi Design of Experiments

Kacker, R. N., Lagergren, E. S., & Filliben, J. J. (1991). Taguchi’s orthogonal arrays are classical designs of experiments. Journal of Research of the National Institute of Standards and Technology, 96(5), 577.

Plackett, R. L., & Burman, J. P. (1946). The design of optimum multifactorial experiments. Biometrika, 305–325. (for Video #11)

Taguchi, G., Chowdhury, S., Wu, Y., Taguchi, S., & Yano, H. (2011). Taguchi’s Quality Engineering Handbook. Hoboken, NJ: John Wiley & Sons.

Chowdhury, S., & Taguchi, S. (2016). Robust Optimization: World's Best Practices for Developing Winning Vehicles. John Wiley & Sons.

Random-Effect Models, Mixed Models, Nested, Split-Plot & Repeated Measure Design of Experiments

Quinn, G. P., & Keough, M. J. (2014). Experimental Design and Data Analysis for Biologists. Cambridge: Cambridge University Press.

What Is an Experiment? Definition and Design

The Basics of an Experiment


Science is concerned with experiments and experimentation, but do you know what exactly an experiment is? Here's a look at what an experiment is... and isn't!

Key Takeaways: Experiments

  • An experiment is a procedure designed to test a hypothesis as part of the scientific method.
  • The two key variables in any experiment are the independent and dependent variables. The independent variable is controlled or changed to test its effects on the dependent variable.
  • Three key types of experiments are controlled experiments, field experiments, and natural experiments.

What Is an Experiment? The Short Answer

In its simplest form, an experiment is simply the test of a hypothesis . A hypothesis, in turn, is a proposed relationship or explanation of phenomena.

Experiment Basics

The experiment is the foundation of the scientific method , which is a systematic means of exploring the world around you. Although some experiments take place in laboratories, you could perform an experiment anywhere, at any time.

Take a look at the steps of the scientific method:

  • Make observations.
  • Formulate a hypothesis.
  • Design and conduct an experiment to test the hypothesis.
  • Evaluate the results of the experiment.
  • Accept or reject the hypothesis.
  • If necessary, make and test a new hypothesis.

Types of Experiments

  • Natural Experiments: A natural experiment is also called a quasi-experiment. A natural experiment involves making a prediction or forming a hypothesis and then gathering data by observing a system. The variables are not controlled in a natural experiment.
  • Controlled Experiments: Lab experiments are controlled experiments, although you can perform a controlled experiment outside of a lab setting! In a controlled experiment, you compare an experimental group with a control group. Ideally, these two groups are identical except for one variable, the independent variable.
  • Field Experiments: A field experiment may be either a natural experiment or a controlled experiment. It takes place in a real-world setting, rather than under lab conditions. For example, an experiment involving an animal in its natural habitat would be a field experiment.

Variables in an Experiment

Simply put, a variable is anything you can change or control in an experiment. Common examples of variables include temperature, duration of the experiment, composition of a material, amount of light, etc. There are three kinds of variables in an experiment: controlled variables, independent variables, and dependent variables.

Controlled variables, sometimes called constant variables, are variables that are kept constant or unchanging. For example, if you are doing an experiment measuring the fizz released from different types of soda, you might control the size of the container so that all brands of soda would be in 12-oz cans. If you are performing an experiment on the effect of spraying plants with different chemicals, you would try to maintain the same pressure and maybe the same volume when spraying your plants.

The independent variable is the one factor that you are changing. It is one factor because usually in an experiment you try to change one thing at a time. This makes measurements and interpretation of the data much easier. If you are trying to determine whether heating water allows you to dissolve more sugar in the water, then your independent variable is the temperature of the water. This is the variable you are purposely changing.

The dependent variable is the variable you observe, to see whether it is affected by your independent variable. In the example where you are heating water to see if this affects the amount of sugar you can dissolve, the mass or volume of sugar (whichever you choose to measure) would be your dependent variable.
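
To make the three kinds of variables concrete, here is a minimal sketch of the sugar experiment in Python; the temperatures, water volume, and readings are hypothetical values invented purely for illustration.

```python
# A minimal sketch of the sugar experiment described above. All numbers
# are illustrative values, not measured data.

water_volume_ml = 250               # controlled variable: identical in every trial
temperatures_c = [20, 40, 60, 80]   # independent variable: deliberately varied

# Dependent variable: what we measure at each temperature
# (hypothetical readings of sugar dissolved, in grams).
dissolved_sugar_g = {20: 500, 40: 590, 60: 720, 80: 900}

for temp in temperatures_c:
    grams = dissolved_sugar_g[temp]
    print(f"{temp} °C: {grams} g dissolved in {water_volume_ml} ml of water")
```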

Examples of Things That Are Not Experiments

  • Making a model volcano.
  • Making a poster.
  • Changing a lot of factors at once, so you can't truly test the effect of the independent variable.
  • Trying something, just to see what happens. On the other hand, making observations or trying something, after making a prediction about what you expect will happen, is a type of experiment.

Statistical Design and Analysis of Biological Experiments

Chapter 1 Principles of Experimental Design

1.1 Introduction

The validity of conclusions drawn from a statistical analysis crucially hinges on the manner in which the data are acquired, and even the most sophisticated analysis will not rescue a flawed experiment. Planning an experiment and thinking about the details of data acquisition is so important for a successful analysis that R. A. Fisher—who single-handedly invented many of the experimental design techniques we are about to discuss—famously wrote

To call in the statistician after the experiment is done may be no more than asking him to perform a post-mortem examination: he may be able to say what the experiment died of. (Fisher 1938)

(Statistical) design of experiments provides the principles and methods for planning experiments and tailoring the data acquisition to an intended analysis. Design and analysis of an experiment are best considered as two aspects of the same enterprise: the goals of the analysis strongly inform an appropriate design, and the implemented design determines the possible analyses.

The primary aim of designing experiments is to ensure that valid statistical and scientific conclusions can be drawn that withstand the scrutiny of a determined skeptic. Good experimental design also ensures that resources are used efficiently, and that estimates are sufficiently precise and hypothesis tests adequately powered. It protects our conclusions by excluding alternative interpretations or rendering them implausible. Three main pillars of experimental design are randomization, replication, and blocking, and we will flesh out their effects on the subsequent analysis as well as their implementation in an experimental design.

An experimental design is always tailored towards predefined (primary) analyses and an efficient analysis and unambiguous interpretation of the experimental data is often straightforward from a good design. This does not prevent us from doing additional analyses of interesting observations after the data are acquired, but these analyses can be subjected to more severe criticisms and conclusions are more tentative.

In this chapter, we provide the wider context for using experiments in a larger research enterprise and informally introduce the main statistical ideas of experimental design. We use a comparison of two samples as our main example to study how design choices affect an analysis, but postpone a formal quantitative analysis to the next chapters.

1.2 A Cautionary Tale

For illustrating some of the issues arising in the interplay of experimental design and analysis, we consider a simple example. We are interested in comparing the enzyme levels measured in processed blood samples from laboratory mice, when the sample processing is done either with a kit from a vendor A, or a kit from a competitor B. For this, we take 20 mice and randomly select 10 of them for sample preparation with kit A, while the blood samples of the remaining 10 mice are prepared with kit B. The experiment is illustrated in Figure 1.1 A and the resulting data are given in Table 1.1.

Table 1.1: Measured enzyme levels from samples of twenty mice. Samples of ten mice each were processed using a kit of vendor A and B, respectively.
Kit A: 8.96 8.95 11.37 12.63 11.38 8.36 6.87 12.35 10.32 11.99
Kit B: 12.68 11.37 12.00 9.81 10.35 11.76 9.01 10.83 8.76 9.99

One option for comparing the two kits is to look at the difference in average enzyme levels, and we find an average level of 10.32 for vendor A and 10.66 for vendor B. We would like to interpret their difference of -0.34 as the difference due to the two preparation kits and conclude whether the two kits give equal results or if measurements based on one kit are systematically different from those based on the other kit.
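
The calculation itself is elementary; a short Python sketch using the Table 1.1 values reproduces the two averages and their difference.

```python
# The enzyme levels from Table 1.1 and the comparison described above.
kit_a = [8.96, 8.95, 11.37, 12.63, 11.38, 8.36, 6.87, 12.35, 10.32, 11.99]
kit_b = [12.68, 11.37, 12.00, 9.81, 10.35, 11.76, 9.01, 10.83, 8.76, 9.99]

mean_a = sum(kit_a) / len(kit_a)   # 10.32
mean_b = sum(kit_b) / len(kit_b)   # 10.66
print(f"mean A = {mean_a:.2f}, mean B = {mean_b:.2f}")
print(f"difference A - B = {mean_a - mean_b:.2f}")   # -0.34
```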

Such interpretation, however, is only valid if the two groups of mice and their measurements are identical in all aspects except the sample preparation kit. If we use one strain of mice for kit A and another strain for kit B, any difference might also be attributed to inherent differences between the strains. Similarly, if the measurements using kit B were conducted much later than those using kit A, any observed difference might be attributed to changes in, e.g., mice selected, batches of chemicals used, device calibration, or any number of other influences. None of these competing explanations for an observed difference can be excluded from the given data alone, but good experimental design allows us to render them (almost) arbitrarily implausible.

A second aspect for our analysis is the inherent uncertainty in our calculated difference: if we repeat the experiment, the observed difference will change each time, and this will be more pronounced for a smaller number of mice, among others. If we do not use a sufficient number of mice in our experiment, the uncertainty associated with the observed difference might be too large, such that random fluctuations become a plausible explanation for the observed difference. Systematic differences between the two kits, of practically relevant magnitude in either direction, might then be compatible with the data, and we can draw no reliable conclusions from our experiment.
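
A small simulation illustrates this point. The sketch below assumes, purely for illustration, that enzyme levels follow a normal distribution with no true kit difference; the spread of the observed difference shrinks as the number of mice grows.

```python
# Simulated replicate experiments under a normal model with *no* true
# kit difference; the mean and standard deviation are illustrative
# assumptions, not estimates from the data above.
import random

random.seed(1)

def observed_difference(n_per_group):
    """Difference of group means in one simulated experiment."""
    a = [random.gauss(10.5, 1.8) for _ in range(n_per_group)]
    b = [random.gauss(10.5, 1.8) for _ in range(n_per_group)]
    return sum(a) / n_per_group - sum(b) / n_per_group

for n in (3, 10, 50):
    diffs = [observed_difference(n) for _ in range(10_000)]
    spread = (sum(d * d for d in diffs) / len(diffs)) ** 0.5
    print(f"n = {n:>2} mice per group: spread of observed difference ≈ {spread:.2f}")
```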

In each case, the statistical analysis—no matter how clever—was doomed before the experiment was even started, while simple ideas from statistical design of experiments would have provided correct and robust results with interpretable conclusions.

1.3 The Language of Experimental Design

By an experiment we understand an investigation where the researcher has full control over selecting and altering the experimental conditions of interest, and we only consider investigations of this type. The selected experimental conditions are called treatments. An experiment is comparative if the responses to several treatments are to be compared or contrasted. The experimental units are the smallest subdivision of the experimental material to which a treatment can be assigned. All experimental units given the same treatment constitute a treatment group. Especially in biology, we often compare treatments to a control group to which some standard experimental conditions are applied; a typical example is using a placebo for the control group, and different drugs for the other treatment groups.

The values observed are called responses and are measured on the response units; these are often identical to the experimental units but need not be. Multiple experimental units are sometimes combined into groupings or blocks, such as mice grouped by litter, or samples grouped by batches of chemicals used for their preparation. More generally, we call any grouping of the experimental material (even with group size one) a unit.

In our example, we selected the mice, used a single sample per mouse, deliberately chose the two specific vendors, and had full control over which kit to assign to which mouse. In other words, the two kits are the treatments and the mice are the experimental units. We took the measured enzyme level of a single sample from a mouse as our response, and samples are therefore the response units. The resulting experiment is comparative, because we contrast the enzyme levels between the two treatment groups.

Figure 1.1: Three designs to determine the difference between two preparation kits A and B based on four mice. A: One sample per mouse. Comparison between averages of samples with same kit. B: Two samples per mouse treated with the same kit. Comparison between averages of mice with same kit requires averaging responses for each mouse first. C: Two samples per mouse each treated with different kit. Comparison between two samples of each mouse, with differences averaged.

In this example, we can coalesce experimental and response units, because we have a single response per mouse and cannot distinguish a sample from a mouse in the analysis, as illustrated in Figure 1.1 A for four mice. Responses from mice with the same kit are averaged, and the kit difference is the difference between these two averages.

By contrast, if we take two samples per mouse and use the same kit for both samples, then the mice are still the experimental units, but each mouse now groups the two response units associated with it. Now, responses from the same mouse are first averaged, and these averages are used to calculate the difference between kits; even though eight measurements are available, this difference is still based on only four mice (Figure 1.1 B).

If we take two samples per mouse, but apply each kit to one of the two samples, then the samples are both the experimental and response units, while the mice are blocks that group the samples. Now, we calculate the difference between kits for each mouse, and then average these differences (Figure 1.1 C).
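
The three analyses differ only in how responses are aggregated. The following sketch spells them out for four mice, using made-up enzyme levels; the numbers carry no meaning beyond illustrating the arithmetic of Figure 1.1.

```python
# Made-up enzyme levels for four mice, analyzed under the three designs
# of Figure 1.1; the numbers only illustrate the arithmetic.

def avg(values):
    return sum(values) / len(values)

# Design A: one sample per mouse, two mice per kit.
a_samples = [10.1, 11.2]                      # mice 1-2, kit A
b_samples = [10.8, 11.9]                      # mice 3-4, kit B
diff_a = avg(a_samples) - avg(b_samples)

# Design B: two samples per mouse, same kit for both samples.
mice_kit_a = [[10.1, 10.3], [11.2, 11.0]]     # mice 1-2
mice_kit_b = [[10.8, 11.0], [11.9, 11.7]]     # mice 3-4
# Average within each mouse first, then compare the kit averages.
diff_b = avg([avg(m) for m in mice_kit_a]) - avg([avg(m) for m in mice_kit_b])

# Design C: two samples per mouse, one per kit; mice act as blocks.
paired = [(10.1, 10.6), (11.2, 11.5), (10.8, 11.2), (11.9, 12.1)]  # (kit A, kit B)
diff_c = avg([a - b for a, b in paired])      # per-mouse differences, then average

print(diff_a, diff_b, diff_c)
```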

If we only use one kit and determine the average enzyme level, then this investigation is still an experiment, but is not comparative.

To summarize, the design of an experiment determines the logical structure of the experiment; it consists of (i) a set of treatments (the two kits); (ii) a specification of the experimental units (animals, cell lines, samples) (the mice in Figure 1.1 A,B and the samples in Figure 1.1 C); (iii) a procedure for assigning treatments to units; and (iv) a specification of the response units and the quantity to be measured as a response (the samples and associated enzyme levels).

1.4 Experiment Validity

Before we embark on the more technical aspects of experimental design, we discuss three components for evaluating an experiment’s validity: construct validity, internal validity, and external validity. These criteria are well-established in areas such as educational and psychological research, and have more recently been discussed for animal research (Würbel 2017), where experiments are increasingly scrutinized for their scientific rationale and their design and intended analyses.

1.4.1 Construct Validity

Construct validity concerns the choice of the experimental system for answering our research question. Is the system even capable of providing a relevant answer to the question?

Studying the mechanisms of a particular disease, for example, might require careful choice of an appropriate animal model that shows a disease phenotype and is accessible to experimental interventions. If the animal model is a proxy for drug development for humans, biological mechanisms must be sufficiently similar between animal and human physiologies.

Another important aspect of the construct is the quantity that we intend to measure (the measurand), and its relation to the quantity or property we are interested in. For example, we might measure the concentration of the same chemical compound once in a blood sample and once in a highly purified sample, and these constitute two different measurands, whose values might not be comparable. Often, the quantity of interest (e.g., liver function) is not directly measurable (or even quantifiable) and we measure a biomarker instead. For example, pre-clinical and clinical investigations may use concentrations of proteins or counts of specific cell types from blood samples, such as the CD4+ cell count used as a biomarker for immune system function.

1.4.2 Internal Validity

The internal validity of an experiment concerns the soundness of the scientific rationale, statistical properties such as precision of estimates, and the measures taken against risk of bias. It refers to the validity of claims within the context of the experiment. Statistical design of experiments plays a prominent role in ensuring internal validity, and we briefly discuss the main ideas before providing the technical details and an application to our example in the subsequent sections.

Scientific Rationale and Research Question

The scientific rationale of a study is (usually) not immediately a statistical question. Translating a scientific question into a quantitative comparison amenable to statistical analysis is no small task and often requires careful consideration. It is a substantial, if non-statistical, benefit of using experimental design that we are forced to formulate a precise-enough research question and decide on the main analyses required for answering it before we conduct the experiment. For example, the question “Is there a difference between placebo and drug?” is insufficiently precise for planning a statistical analysis and determining an adequate experimental design. What exactly is the drug treatment? What should the drug’s concentration be and how is it administered? How do we make sure that the placebo group is comparable to the drug group in all other aspects? What do we measure, and what do we mean by “difference”? A shift in average response, a fold-change, or a change in response before and after treatment?

The scientific rationale also enters the choice of a potential control group to which we compare responses. The quote

The deep, fundamental question in statistical analysis is ‘Compared to what?’ (Tufte 1997)

highlights the importance of this choice.

There are almost never enough resources to answer all relevant scientific questions. We therefore define a few questions of highest interest, and the main purpose of the experiment is answering these questions in the primary analysis. This intended analysis drives the experimental design to ensure relevant estimates can be calculated and have sufficient precision, and tests are adequately powered. This does not preclude us from conducting additional secondary analyses and exploratory analyses, but we are not willing to enlarge the experiment to ensure that strong conclusions can also be drawn from these analyses.

Risk of Bias

Experimental bias is a systematic difference in response between experimental units in addition to the difference caused by the treatments. The experimental units in the different groups are then not equal in all aspects other than the treatment applied to them. We saw several examples in Section 1.2.

Minimizing the risk of bias is crucial for internal validity, and we look at some common measures to eliminate or reduce different types of bias in Section 1.5.

Precision and Effect Size

Another aspect of internal validity is the precision of estimates and the expected effect sizes. Is the experimental setup, in principle, able to detect a difference of relevant magnitude? Experimental design offers several methods for answering this question based on the expected heterogeneity of samples, the measurement error, and other sources of variation: power analysis is a technique for determining the number of samples required to reliably detect a relevant effect size and provide estimates of sufficient precision. More samples yield more precision and more power, but we have to be careful that replication is done at the right level: simply measuring a biological sample multiple times as in Figure 1.1 B yields more measured values, but is pseudo-replication for analyses. Replication should also ensure that the statistical uncertainties of estimates can be gauged from the data of the experiment itself, without additional untestable assumptions. Finally, the technique of blocking, shown in Figure 1.1 C, can remove a substantial proportion of the variation and thereby increase power and precision if we find a way to apply it.
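
As a sketch of what such a power analysis might look like in practice, the snippet below uses the statsmodels package; the targeted effect size, significance level, and power are hypothetical planning values, not quantities taken from this example.

```python
# A power analysis for a two-group comparison using statsmodels.
# Effect size, alpha, and power below are hypothetical planning values.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.8,   # standardized difference (Cohen's d) we want to detect
    alpha=0.05,        # significance level of the planned test
    power=0.80,        # desired probability of detecting the effect
)
print(f"required sample size per group: {n_per_group:.1f}")   # about 25.5 -> 26
```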

1.4.3 External Validity

The external validity of an experiment concerns its replicability and the generalizability of inferences. An experiment is replicable if its results can be confirmed by an independent new experiment, preferably by a different lab and researcher. Experimental conditions in the replicate experiment usually differ from the original experiment, which provides evidence that the observed effects are robust to such changes. A much weaker condition on an experiment is reproducibility , the property that an independent researcher draws equivalent conclusions based on the data from this particular experiment, using the same analysis techniques. Reproducibility requires publishing the raw data, details on the experimental protocol, and a description of the statistical analyses, preferably with accompanying source code. Many scientific journals subscribe to reporting guidelines to ensure reproducibility and these are also helpful for planning an experiment.

A main threat to replicability and generalizability is overly tightly controlled experimental conditions, where inferences only hold for a specific lab under the very specific conditions of the original experiment. Introducing systematic heterogeneity and using multi-center studies effectively broadens the experimental conditions and therefore the inferences for which internal validity is available.

For systematic heterogeneity, experimental conditions are systematically altered in addition to the treatments, and treatment differences are estimated for each condition. For example, we might split the experimental material into several batches and use a different day of analysis, sample preparation, batch of buffer, measurement device, and lab technician for each batch. A more general inference is then possible if effect size, effect direction, and precision are comparable between the batches, indicating that the treatment differences are stable over the different conditions.
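
A minimal sketch of this per-batch check, using synthetic data, might look as follows: the kit difference is estimated separately within each batch and the batch-level estimates are compared.

```python
# A sketch of the per-batch check described above: estimate the kit
# difference separately within each batch and compare the estimates.
# The (batch, kit, enzyme level) records are synthetic illustration data.

records = [
    ("day1", "A", 10.2), ("day1", "A", 10.6), ("day1", "B", 10.9), ("day1", "B", 11.1),
    ("day2", "A", 9.8),  ("day2", "A", 10.1), ("day2", "B", 10.4), ("day2", "B", 10.7),
]

def kit_mean(batch, kit):
    """Average enzyme level for one kit within one batch."""
    values = [level for b, k, level in records if b == batch and k == kit]
    return sum(values) / len(values)

for batch in ("day1", "day2"):
    diff = kit_mean(batch, "A") - kit_mean(batch, "B")
    print(f"{batch}: estimated difference A - B = {diff:+.2f}")
# Comparable differences across batches support a more general inference.
```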

In multi-center experiments , the same experiment is conducted in several different labs and the results compared and merged. Multi-center approaches are very common in clinical trials and often necessary to reach the required number of patient enrollments.

Generalizability of randomized controlled trials in medicine and animal studies can suffer from overly restrictive eligibility criteria. In clinical trials, patients are often included or excluded based on co-medications and co-morbidities, and the resulting sample of eligible patients might no longer be representative of the patient population. For example, Travers et al. (2007) used the eligibility criteria of 17 randomized controlled trials of asthma treatments and found that out of 749 patients, only a median of 6% (45 patients) would be eligible for an asthma-related randomized controlled trial. This calls into question the relevance of the trials’ findings for asthma patients in general.

1.5 Reducing the Risk of Bias

1.5.1 Randomization of Treatment Allocation

If systematic differences other than the treatment exist between our treatment groups, then the effect of the treatment is confounded with these other differences and our estimates of treatment effects might be biased.

We remove such unwanted systematic differences from our treatment comparisons by randomizing the allocation of treatments to experimental units. In a completely randomized design, each experimental unit has the same chance of being subjected to any of the treatments, and any differences between the experimental units other than the treatments are distributed over the treatment groups. Importantly, randomization is the only method that also protects our experiment against unknown sources of bias: we do not need to know all or even any of the potential differences and yet their impact is eliminated from the treatment comparisons by random treatment allocation.

Randomization has two effects: (i) differences unrelated to treatment become part of the ‘statistical noise’ rendering the treatment groups more similar; and (ii) the systematic differences are thereby eliminated as sources of bias from the treatment comparison.

Randomization transforms systematic variation into random variation.

In our example, a proper randomization would select 10 out of our 20 mice fully at random, such that every mouse has the same chance of being assigned to kit A. These ten mice are then assigned to kit A, and the remaining mice to kit B. This allocation is entirely independent of the treatments and of any properties of the mice.

To ensure random treatment allocation, some kind of random process needs to be employed. This can be as simple as shuffling a pack of 10 red and 10 black cards or using a software-based random number generator. Randomization is slightly more difficult if the number of experimental units is not known at the start of the experiment, such as when patients are recruited for an ongoing clinical trial (sometimes called rolling recruitment), and we want to have reasonable balance between the treatment groups at each stage of the trial.
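
For the mouse example, a software-based randomization might look like the following sketch; the seed is fixed only so the allocation can be reproduced.

```python
# A minimal sketch of a completely randomized allocation: 10 of the
# 20 mice are drawn fully at random for kit A, the rest get kit B.
import random

random.seed(42)   # fixed only so the allocation can be reproduced

mice = list(range(1, 21))                  # mouse IDs 1..20
kit_a = sorted(random.sample(mice, 10))    # each 10-mouse subset is equally likely
kit_b = sorted(set(mice) - set(kit_a))     # the remaining 10 mice

print("kit A:", kit_a)
print("kit B:", kit_b)
```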

Seemingly random assignments “by hand” are usually no less complicated than fully random assignments, but are always inferior. If surprising results ensue from the experiment, such assignments are subject to unanswerable criticism and suspicion of unwanted bias. Even worse are systematic allocations; they can only remove bias from known causes, and immediately raise red flags under the slightest scrutiny.

The Problem of Undesired Assignments

Even with a fully random treatment allocation procedure, we might end up with an undesirable allocation. For our example, the treatment group of kit A might—just by chance—contain mice that are all bigger or more active than those in the other treatment group. Statistical orthodoxy recommends using the design nevertheless, because only full randomization guarantees valid estimates of residual variance and unbiased estimates of effects. This argument, however, concerns the long-run properties of the procedure and seems of little help in this specific situation. Why should we care if the randomization yields correct estimates under replication of the experiment, if the particular experiment is jeopardized?

Another solution is to create a list of all possible allocations that we would accept and randomly choose one of these allocations for our experiment. The analysis should then reflect this restriction in the possible randomizations, which often renders this approach difficult to implement.

The most pragmatic method is to reject highly undesirable designs and compute a new randomization (Cox 1958). Undesirable allocations are unlikely to arise for large sample sizes, and we might accept a small bias in estimation for small sample sizes, when uncertainty in the estimated treatment effect is already high. In this approach, whenever we reject a particular outcome, we must also be willing to reject the outcome if we permute the treatment level labels. If we reject eight big and two small mice for kit A, then we must also reject two big and eight small mice. We must also be transparent and report a rejected allocation, so that critics may come to their own conclusions about potential biases and their remedies.
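
A sketch of this reject-and-redraw procedure is given below; the body masses and the balance threshold are invented for illustration, and the rejection rule is symmetric in the two kits as required.

```python
# Reject-and-redraw randomization: draw a fully random allocation,
# reject it if the groups are clearly unbalanced on a known covariate,
# and draw again. Body masses and the threshold are invented values.
import random

random.seed(7)

body_mass = {mouse: random.uniform(18, 32) for mouse in range(1, 21)}  # grams

def group_mean(group):
    return sum(body_mass[m] for m in group) / len(group)

def randomize_until_balanced(max_imbalance=1.5):
    while True:
        kit_a = set(random.sample(sorted(body_mass), 10))
        kit_b = set(body_mass) - kit_a
        # The rejection rule is symmetric in the two kits, as required.
        if abs(group_mean(kit_a) - group_mean(kit_b)) <= max_imbalance:
            return sorted(kit_a), sorted(kit_b)

kit_a, kit_b = randomize_until_balanced()
print("kit A:", kit_a)
print("kit B:", kit_b)
```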

1.5.2 Blinding

Bias in treatment comparisons is also introduced if treatment allocation is random, but responses cannot be measured entirely objectively, or if knowledge of the assigned treatment affects the response. In clinical trials, for example, patients might react differently when they know they are on a placebo treatment, an effect known as cognitive bias. In animal experiments, caretakers might report more abnormal behavior for animals on a more severe treatment. Cognitive bias can be eliminated by concealing the treatment allocation from technicians or participants of a clinical trial, a technique called single-blinding.

If response measures are partially based on professional judgement (such as a clinical scale), the patient or physician might unconsciously report lower scores for a placebo treatment, a phenomenon known as observer bias. Its removal requires double blinding, where treatment allocations are additionally concealed from the experimentalist.

Blinding requires randomized treatment allocation to begin with, and substantial effort might be needed to implement it. Drug companies, for example, have to go to great lengths to ensure that a placebo looks, tastes, and feels similar enough to the actual drug. Additionally, blinding is often done by coding the treatment conditions and samples, and effect sizes and statistical significance are calculated before the code is revealed.
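
As a minimal sketch of such coding, the snippet below assigns anonymous labels to samples and keeps the key separate; the labeling scheme is an assumption for illustration.

```python
# A minimal sketch of sample coding for blinding: an uninvolved third
# party keeps the key, and the analyst sees only anonymous codes.
# The label scheme (S001, S002, ...) is an assumption for illustration.
import random

random.seed(3)

samples = [f"mouse{m:02d}" for m in range(1, 21)]   # real sample identities
codes = [f"S{i:03d}" for i in range(1, 21)]         # anonymous labels
random.shuffle(codes)

blinding_key = dict(zip(samples, codes))      # sealed until the analysis is finished
analyst_view = sorted(blinding_key.values())  # all the analyst gets to see

print(analyst_view[:5])
```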

In clinical trials, double-blinding creates a conflict of interest. The attending physicians do not know which patient received which treatment, and thus accumulation of side-effects cannot be linked to any treatment. For this reason, clinical trials have a data monitoring committee not involved in the final analysis, which performs intermediate analyses of efficacy and safety at predefined intervals. If severe problems are detected, the committee might recommend altering or aborting the trial. The same might happen if one treatment already shows overwhelming evidence of superiority, such that it becomes unethical to withhold this treatment from the other patients.

1.5.3 Analysis Plan and Registration

An often overlooked source of bias has been termed the researcher degrees of freedom or garden of forking paths in the data analysis. For any set of data, there are many different options for its analysis: some results might be considered outliers and discarded, assumptions are made on error distributions and appropriate test statistics, different covariates might be included into a regression model. Often, multiple hypotheses are investigated and tested, and analyses are done separately on various (overlapping) subgroups. Hypotheses formed after looking at the data require additional care in their interpretation; almost never will \(p\)-values for these ad hoc or post hoc hypotheses be statistically justifiable. Many different measured response variables invite fishing expeditions, where patterns in the data are sought without an underlying hypothesis. Only reporting those sub-analyses that gave ‘interesting’ findings invariably leads to biased conclusions and is called cherry-picking or \(p\)-hacking (or much less flattering names).

The statistical analysis is always part of a larger scientific argument and we should consider the necessary computations in relation to building our scientific argument about the interpretation of the data. In addition to the statistical calculations, this interpretation requires substantial subject-matter knowledge and includes (many) non-statistical arguments. Two quotes highlight that experiment and analysis are a means to an end and not the end in itself.

There is a boundary in data interpretation beyond which formulas and quantitative decision procedures do not go, where judgment and style enter. (Abelson 1995)
Often, perfectly reasonable people come to perfectly reasonable decisions or conclusions based on nonstatistical evidence. Statistical analysis is a tool with which we support reasoning. It is not a goal in itself. (Bailar III 1981)

There is often a grey area between exploiting researcher degrees of freedom to arrive at a desired conclusion, and creative yet informed analyses of data. One way to navigate this area is to distinguish between exploratory studies and confirmatory studies. The former have no clearly stated scientific question, but are used to generate interesting hypotheses by identifying potential associations or effects that are then further investigated. Conclusions from these studies are very tentative and must be reported honestly as such. In contrast, standards are much higher for confirmatory studies, which investigate a specific predefined scientific question. Analysis plans and pre-registration of an experiment are accepted means for demonstrating lack of bias due to researcher degrees of freedom, and separating primary from secondary analyses allows emphasizing the main goals of the study.

Analysis Plan

The analysis plan is written before conducting the experiment and details the measurands and estimands, the hypotheses to be tested together with a power and sample size calculation, a discussion of relevant effect sizes, detection and handling of outliers and missing data, as well as steps for data normalization such as transformations and baseline corrections. If a regression model is required, its factors and covariates are outlined. Particularly in biology, handling measurements below the limit of quantification and saturation effects require careful consideration.

In the context of clinical trials, the problem of estimands has become a recent focus of attention. An estimand is the target of a statistical estimation procedure, for example the true average difference in enzyme levels between the two preparation kits. A main problem in many studies is post-randomization events that can change the estimand, even if the estimation procedure remains the same. For example, if kit B fails to produce usable samples for measurement in five out of ten cases because the enzyme level was too low, while kit A could handle these enzyme levels perfectly fine, then this might severely exaggerate the observed difference between the two kits. Similar problems arise in drug trials, when some patients stop taking one of the drugs due to side-effects or other complications.

Registration

Registration of experiments is an even more severe measure used in conjunction with an analysis plan and is becoming standard in clinical trials. Here, information about the trial, including the analysis plan, the procedure to recruit patients, and stopping criteria, is registered in a public database. Publications based on the trial then refer to this registration, such that reviewers and readers can compare what the researchers intended to do and what they actually did. Similar portals for pre-clinical and translational research are also available.

1.6 Notes and Summary

The problem of measurements and measurands is further discussed for statistics in Hand (1996) and specifically for biological experiments in Coxon, Longstaff, and Burns (2019). A general review of methods for handling missing data is Dong and Peng (2013). The different roles of randomization are emphasized in Cox (2009).

Two well-known reporting guidelines are the ARRIVE guidelines for animal research (Kilkenny et al. 2010) and the CONSORT guidelines for clinical trials (Moher et al. 2010). Guidelines describing the minimal information required for reproducing experimental results have been developed for many types of experimental techniques, including microarrays (MIAME), RNA sequencing (MINSEQE), metabolomics (MSI) and proteomics (MIAPE) experiments; the FAIRSHARE initiative provides a more comprehensive collection (Sansone et al. 2019).

The problems of experimental design in animal experiments and particularly translational research are discussed in Couzin-Frankel (2013). Multi-center studies are now considered for these investigations, and using a second laboratory already increases reproducibility substantially (Richter et al. 2010; Richter 2017; Voelkl et al. 2018; Karp 2018) and allows standardizing the treatment effects (Kafkafi et al. 2017). First attempts are reported of using designs similar to clinical trials (Llovera and Liesz 2016). Exploratory-confirmatory research and external validity for animal studies are discussed in Kimmelman, Mogil, and Dirnagl (2014) and Pound and Ritskes-Hoitinga (2018). Further information on pilot studies is found in Moore et al. (2011), Sim (2019), and Thabane et al. (2010).

The deliberate use of statistical analyses and their interpretation for supporting a larger argument was called statistics as principled argument (Abelson 1995). Employing useless statistical analysis without reference to the actual scientific question is surrogate science (Gigerenzer and Marewski 2014), and adaptive thinking is integral to meaningful statistical analysis (Gigerenzer 2002).

In an experiment, the investigator has full control over the experimental conditions applied to the experimental material. The experimental design gives the logical structure of an experiment: the units describing the organization of the experimental material, the treatments and their allocation to units, and the response. Statistical design of experiments includes techniques to ensure internal validity of an experiment, and methods to make inference from experimental data efficient.


Kolb’s Learning Styles and Experiential Learning Cycle


David Kolb published his learning styles model in 1984, from which he developed his learning style inventory.

Kolb’s experiential learning theory works on two levels: a four-stage learning cycle and four separate learning styles. Much of Kolb’s theory concerns the learner’s internal cognitive processes.

Kolb states that learning involves the acquisition of abstract concepts that can be applied flexibly in a range of situations. In Kolb’s theory, the impetus for the development of new concepts is provided by new experiences.

“Learning is the process whereby knowledge is created through the transformation of experience” (Kolb, 1984, p. 38).

The Experiential Learning Cycle

Kolb’s experiential learning style theory is typically represented by a four-stage learning cycle in which the learner “touches all the bases”:

[Figure: Kolb’s four-stage experiential learning cycle]

The terms “Reflective Cycle” and “Experiential Learning Cycle” are often used interchangeably when referring to this four-stage learning process. The main idea behind both terms is that effective learning occurs through a continuous cycle of experience, reflection, conceptualization, and experimentation.

  • Concrete Experience – the learner encounters a concrete experience. This might be a new experience or situation, or a reinterpretation of existing experience in the light of new concepts.
  • Reflective Observation of the New Experience – the learner reflects on the new experience in the light of their existing knowledge. Of particular importance are any inconsistencies between experience and understanding.
  • Abstract Conceptualization – reflection gives rise to a new idea, or a modification of an existing abstract concept (the person has learned from their experience).
  • Active Experimentation – the newly created or modified concepts give rise to experimentation. The learner applies their idea(s) to the world around them to see what happens.
Effective learning is seen when a person progresses through a cycle of four stages: (1) having a concrete experience, followed by (2) observation of and reflection on that experience, which leads to (3) the formation of abstract concepts (analysis) and generalizations (conclusions), which are then (4) used to test a hypothesis in future situations, resulting in new experiences.

Kolb's Learning Cycle

Kolb (1984) views learning as an integrated process, with each stage mutually supporting and feeding into the next. It is possible to enter the cycle at any stage and follow it through its logical sequence.

However, effective learning only occurs when a learner can execute all four stages of the model. Therefore, no one stage of the cycle is effective as a learning procedure on its own.

The process of going through the cycle results in the formation of increasingly complex and abstract ‘mental models’ of whatever the learner is learning about.

Learning Styles

Kolb’s learning theory (1984) sets out four distinct learning styles, which are based on a four-stage learning cycle (see above). Kolb explains that different people naturally prefer a certain single learning style.

Various factors influence a person’s preferred style. For example, social environment, educational experiences, or the basic cognitive structure of the individual.

Whatever influences the choice of style, the learning style preference itself is actually the product of two pairs of variables, or two separate “choices” that we make, which Kolb presented as lines of an axis, each with “conflicting” modes at either end.

A typical presentation of Kolb’s two continuums is that the east-west axis is called the Processing Continuum (how we approach a task), and the north-south axis is called the Perception Continuum (our emotional response, or how we think or feel about it).

Kolb believed that we cannot perform both variables on a single axis simultaneously (e.g., think and feel). Our learning style is a product of these two choice decisions.

It’s often easier to see the construction of Kolb’s learning styles in terms of a two-by-two matrix. Each learning style represents a combination of two preferred styles.

The matrix also highlights Kolb’s terminology for the four learning styles: diverging, assimilating, converging, and accommodating:

  Active Experimentation (Doing) Reflective Observation (Watching)
Concrete Experience (Feeling) Accommodating (CE/AE) Diverging (CE/RO)
Abstract Conceptualization (Thinking) Converging (AC/AE) Assimilating (AC/RO)

Knowing a person’s (and your own) learning style enables learning to be orientated according to the preferred method.

That said, everyone responds to and needs the stimulus of all types of learning styles to one extent or another – it’s a matter of using emphasis that fits best with the given situation and a person’s learning style preferences.

Here are brief descriptions of the four Kolb learning styles:

Diverging (feeling and watching – CE/RO)

These people are able to look at things from different perspectives. They are sensitive. They prefer to watch rather than do, tending to gather information and use imagination to solve problems. They are best at viewing concrete situations from several different viewpoints.

Kolb called this style “diverging” because these people perform better in situations that require ideas-generation, for example, brainstorming. People with a diverging learning style have broad cultural interests and like to gather information.

They are interested in people, tend to be imaginative and emotional, and tend to be strong in the arts. People with the diverging style prefer to work in groups, to listen with an open mind and to receive personal feedback.

Assimilating (watching and thinking – AC/RO)

The assimilating learning preference involves a concise, logical approach. Ideas and concepts are more important than people.

These people require good, clear explanations rather than a practical opportunity. They excel at understanding wide-ranging information and organizing it in a clear, logical format.

People with an assimilating learning style are less focused on people and more interested in ideas and abstract concepts.  People with this style are more attracted to logically sound theories than approaches based on practical value.

This learning style is important for effectiveness in information and science careers. In formal learning situations, people with this style prefer readings, lectures, exploring analytical models, and having time to think things through.

Converging (doing and thinking – AC/AE)

People with a converging learning style can solve problems and will use their learning to find solutions to practical issues. They prefer technical tasks, and are less concerned with people and interpersonal aspects.

People with a converging learning style are best at finding practical uses for ideas and theories. They can solve problems and make decisions by finding solutions to questions and problems.

People with a converging learning style are more attracted to technical tasks and problems than social or interpersonal issues. A converging learning style enables specialist and technology abilities.

People with a converging style like to experiment with new ideas, to simulate, and to work with practical applications.

Accommodating (doing and feeling – CE/AE)

The Accommodating learning style is “hands-on,” and relies on intuition rather than logic. These people use other people’s analysis, and prefer to take a practical, experiential approach. They are attracted to new challenges and experiences, and to carrying out plans.

They commonly act on “gut” instinct rather than logical analysis. People with an accommodating learning style will tend to rely on others for information rather than carry out their own analysis. This learning style is prevalent within the general population.

Educational Implications

Both Kolb’s (1984) learning stages and the cycle could be used by teachers to critically evaluate the learning provision typically available to students, and to develop more appropriate learning opportunities.

Educators should ensure that activities are designed and carried out in ways that offer each learner the chance to engage in the manner that suits them best.

Also, individuals can be helped to learn more effectively by the identification of their lesser preferred learning styles and the strengthening of these through the application of the experiential learning cycle.

Ideally, activities and material should be developed in ways that draw on abilities from each stage of the experiential learning cycle and take the students through the whole process in sequence.

Kolb, D. A. (1976). The Learning Style Inventory: Technical Manual . Boston, MA: McBer.

Kolb, D.A. (1981). Learning styles and disciplinary differences. In A.W. Chickering (Ed.), The Modern American College (pp. 232–255). San Francisco, CA: Jossey-Bass.

Kolb, D. A. (1984). Experiential learning: Experience as the source of learning and development (Vol. 1). Englewood Cliffs, NJ: Prentice-Hall.

Kolb, D. A., & Fry, R. (1975). Toward an applied theory of experiential learning. In C. Cooper (Ed.), Studies of group process (pp. 33–57). New York: Wiley.

Kolb, D. A., Rubin, I. M., & McIntyre, J. M. (1984). Organizational psychology: readings on human behavior in organizations . Englewood Cliffs, NJ: Prentice-Hall.

Further Reading

  • How to Write a Psychology Essay
  • David Kolb’s Website
  • Pashler, H., McDaniel, M., Rohrer, D., & Bjork, R. (2008). Learning styles: Concepts and evidence. Psychological Science in the Public Interest, 9(3), 105–119.
  • What? So What? Now What? Reflective Model



Rutherford Atomic Model and Limitations


Rutherford Atomic Model – The plum pudding model given by J. J. Thomson failed to explain certain experimental results associated with the atomic structure of elements. Ernest Rutherford, a New Zealand-born British physicist, conducted an experiment and, based on its observations, explained the atomic structure of elements and proposed Rutherford’s atomic model.


Rutherford’s Alpha Scattering Experiment

Rutherford conducted an experiment by bombarding a thin sheet of gold with α-particles and then studied the trajectory of these particles after their interaction with the gold foil.


Rutherford, in his experiment, directed high-energy streams of α-particles from a radioactive source at a thin sheet of gold (100 nm thick). To study the deflection of the α-particles, he placed a fluorescent zinc sulphide screen around the thin gold foil. Rutherford made certain observations that contradicted Thomson’s atomic model.

The observations made by Rutherford led him to conclude that:

  • A major fraction of the α-particles bombarded towards the gold sheet passed through it without any deflection, and hence most of the space in an atom is empty.
  • Some of the α-particles were deflected by the gold sheet by very small angles, and hence the positive charge in an atom is not uniformly distributed; it is concentrated in a very small volume.
  • Very few of the α-particles were deflected back; that is, only a few α-particles had a deflection angle of nearly 180°. So the volume occupied by the positively charged particles in an atom is very small compared to the total volume of the atom (a rough closest-approach estimate follows this list).
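
The nuclear-size claim in the last observation can be made quantitative with a standard closest-approach estimate, not given in the original article: for a head-on collision, the α-particle's kinetic energy is entirely converted into Coulomb potential energy at the turning point. Below is a minimal Python sketch; the 5 MeV kinetic energy is an assumed value, typical of the α-sources available to Rutherford.

```python
# Hedged back-of-envelope estimate (not from the article): distance of
# closest approach for a head-on (180°) alpha-gold collision, found by
# equating the alpha's kinetic energy to the Coulomb potential energy.
KE2 = 1.44        # Coulomb constant times e^2, in MeV·fm
Z_GOLD = 79       # atomic number of gold
E_ALPHA = 5.0     # assumed alpha kinetic energy, MeV (typical source energy)

# E = (2e)(Ze)k / d  =>  d = 2 * Z * (k e^2) / E
d_fm = 2 * Z_GOLD * KE2 / E_ALPHA
print(f"closest approach ≈ {d_fm:.0f} fm ≈ {d_fm * 1e-15:.1e} m")
# ≈ 46 fm: thousands of times smaller than the ~1e-10 m atomic radius,
# consistent with a tiny, dense, positively charged core.
```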

Based on the above observations and conclusions, Rutherford proposed the atomic structure of elements. According to the Rutherford atomic model:

  • The positive charge and most of the mass of an atom is concentrated in an extremely small volume. He called this region of the atom the nucleus.
  • Rutherford’s model proposed that the negatively charged electrons surround the nucleus of an atom. He also claimed that the electrons revolve around the nucleus at very high speed in circular paths, which he named orbits.
  • The negatively charged electrons and the densely packed, positively charged nucleus are held together by a strong electrostatic force of attraction.

Although the Rutherford atomic model was based on experimental observations, it failed to explain certain things.

  • Rutherford proposed that the electrons revolve around the nucleus in fixed paths called orbits. According to Maxwell, accelerated charged particles emit electromagnetic radiation, so an electron revolving around the nucleus should radiate. The radiated energy would come at the expense of the electron’s orbital energy, causing the orbit to shrink until the electron collapsed into the nucleus. Calculations show that, per the Rutherford model, an electron would collapse into the nucleus in less than 10⁻⁸ seconds (see the sketch after this list). So the Rutherford model was not in accordance with Maxwell’s theory and could not explain the stability of an atom.
  • Another drawback was that the model said nothing about the arrangement of electrons in an atom, which left the theory incomplete.
  • Although the early atomic models were inaccurate and failed to explain certain experimental results, they formed the basis for future developments in the world of quantum mechanics.
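
To make the stability argument in the first bullet concrete, here is a minimal Python sketch (not from the article) of the standard classical estimate: Larmor radiation losses imply the orbit shrinks at a rate dr/dt = -(4/3)·r0²·c/r², and integrating from an assumed starting radius equal to the Bohr radius gives an in-spiral time of a0³/(4·r0²·c).

```python
# Hedged sketch of the classical "radiating electron" collapse estimate.
# Integrating dr/dt = -(4/3) * R0**2 * C / r**2 from r = A0 down to r = 0
# gives t = A0**3 / (4 * R0**2 * C).
A0 = 5.29e-11   # assumed starting radius: the Bohr radius, m
R0 = 2.82e-15   # classical electron radius, m
C = 3.00e8      # speed of light, m/s

t_collapse = A0**3 / (4 * R0**2 * C)
print(f"classical collapse time ≈ {t_collapse:.1e} s")
# ≈ 1.6e-11 s, comfortably below the 1e-8 s bound quoted above, so a
# classical Rutherford atom could not be stable.
```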



What was the speciality of Rutherford’s atomic model?

Rutherford was the first to establish the presence of a nucleus in an atom. By bombarding a gold sheet with α-particles, he discovered a positively charged species inside the atom.

What is Rutherford’s atomic model?

Rutherford proposed the atomic structure of elements. He explained that a positively charged region is present inside the atom, and that most of the mass of the atom is concentrated there. He also stated that negatively charged particles revolve around the nucleus, and that there is an electrostatic force of attraction between them.

What are the limitations of Rutherford’s atomic model?

Rutherford failed to explain the arrangement of electrons in an atom. In light of Maxwell’s theory of electromagnetic radiation, he was also unable to explain the stability of the atom.

What kind of experiment did Rutherford perform?

Rutherford performed an alpha scattering experiment. He bombarded α-particles on a gold sheet and then studied the trajectory of these α-particles.

What was the primary observation of Rutherford’s atomic model?

Rutherford observed that a minute, positively charged region is present inside the atom, and that most of the mass of the atom is concentrated there.



Experiment and constitutive modelling of creep deformation in the frozen silt-concrete interface

  • Original Article
  • Published: 10 September 2024
  • Volume 21, pages 3172–3185 (2024)


  • Fei He (ORCID: orcid.org/0009-0007-8607-8878) 1,
  • Qingquan Liu (ORCID: orcid.org/0009-0002-4714-6763) 1,
  • Wanyu Lei (ORCID: orcid.org/0009-0005-6078-039X) 1,
  • Xu Wang (ORCID: orcid.org/0000-0003-2833-9770) 1, 2,
  • Erqing Mao (ORCID: orcid.org/0009-0007-4337-4431) 1,
  • Sheng Li (ORCID: orcid.org/0000-0003-0118-3876) 1 &
  • Hangjie Chen (ORCID: orcid.org/0009-0008-9252-3920) 1

To ensure the long-term safety and stability of bridge pile foundations in permafrost regions, it is necessary to investigate the rheological effects on the pile tip and pile side bearing capacities. The creep characteristics of the pile-frozen soil interface are critical for determining the long-term stability of permafrost pile foundations. This study utilized a self-developed large stress-controlled shear apparatus to investigate the shear creep characteristics of the frozen silt-concrete interface, and examined the influence of freezing temperatures (−1, −2, and −5°C), contact surface roughness (0, 0.60, 0.75, and 1.15 mm), normal stress (50, 100, and 150 kPa), and shear stress on the creep characteristics of the contact surface. By incorporating the contact surface’s creep behavior and development trends, we established a creep constitutive model for the frozen silt-concrete interface based on the Nishihara model, introducing nonlinear elements and a damage factor. The results revealed significant creep effects on the frozen silt-concrete interface under constant load, with creep displacement at approximately 2–15 times the instantaneous displacement and a failure creep displacement ranging from 6 to 8 mm. Under different experimental conditions, the creep characteristics of the frozen silt-concrete interface varied. A larger roughness, lower freezing temperatures, and higher normal stresses resulted in a longer sample attenuation creep time, a lower steady-state creep rate, higher long-term creep strength, and stronger creep stability. Building upon the Nishihara model, we considered the influence of shear stress and time on the viscoelastic viscosity coefficient and introduced a damage factor to the viscoplasticity. The improved model effectively described the entire creep process of the frozen silt-concrete interface. The results provide theoretical support for the interaction between pile and soil in permafrost regions.
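
The improved model itself (nonlinear viscosity plus a damage factor) is not reproduced in this preview, but the baseline it builds on, the classical Nishihara body, is well known: a Hooke spring, a Kelvin viscoelastic unit, and a Bingham viscoplastic unit in series. The following minimal Python sketch shows the baseline creep law under constant shear stress; all parameter values are invented for illustration.

```python
import numpy as np

def nishihara_creep(t, tau, G0, G1, eta1, eta2, tau_s):
    """Creep of the classical Nishihara model under constant shear stress tau.

    Instantaneous elastic part (Hooke spring), decaying viscoelastic part
    (Kelvin unit), and a linear-in-time viscoplastic part (Bingham unit)
    that is active only above the yield stress tau_s.
    """
    elastic = tau / G0
    viscoelastic = (tau / G1) * (1.0 - np.exp(-G1 * t / eta1))
    viscoplastic = max(tau - tau_s, 0.0) / eta2 * t
    return elastic + viscoelastic + viscoplastic

# Illustrative, made-up parameters; units only need to be mutually consistent.
t_hours = np.linspace(0.0, 100.0, 6)
creep = nishihara_creep(t_hours, tau=120.0, G0=60.0, G1=30.0,
                        eta1=200.0, eta2=5000.0, tau_s=100.0)
print(creep)  # a monotonically increasing creep curve
```

Per the abstract, the paper's modification makes the viscoelastic viscosity coefficient a function of shear stress and time and degrades the viscoplastic element with a damage factor; those refinements change the curve's shape but not this overall series structure.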


Availability of Data/Materials: The related data in this study are available in the supplementary information. For further detailed information, you can contact the corresponding author.


Acknowledgments

We acknowledge financial support from the National Natural Science Foundation of China (41902272) and the Gansu Province Basic Research Innovation Group Project (21JR7RA347).

Author information

Authors and Affiliations

School of Civil Engineering, Lanzhou Jiaotong University, Lanzhou, 730070, China

Fei He, Qingquan Liu, Wanyu Lei, Xu Wang, Erqing Mao, Sheng Li & Hangjie Chen

Key Laboratory of Road & Bridge and Underground Engineering of Gansu Province, Lanzhou, 730070, China


Contributions

HE Fei: conceptualization, validation, writing-original draft. LIU Qingquan: data curation, writing-review and editing. LEI Wanyu: data curation, visualization, investigation. WANG Xu: Funding acquisition. MAO Erqing: methodology. LI Sheng: formal analysis, writing-review and editing. CHEN Hangjie: resources, supervision.

Corresponding author

Correspondence to Fei He.

Ethics declarations

Conflict of Interest: The authors declare no conflicts of interest.


About this article

He, F., Liu, Q., Lei, W. et al. Experiment and constitutive modelling of creep deformation in the frozen silt-concrete interface. J. Mt. Sci. 21, 3172–3185 (2024). https://doi.org/10.1007/s11629-024-8787-5


Received: 28 March 2024

Revised: 21 July 2024

Accepted: 28 July 2024

Published: 10 September 2024

Issue Date: September 2024

DOI: https://doi.org/10.1007/s11629-024-8787-5


  • Creep characteristics
  • Contact surface
  • Frozen silt
  • Constitutive model
  • Freezing temperature

  • Open access
  • Published: 09 September 2024

Structural model and characteristics of entrepreneurial opportunity recognition abilities among university students in China: a grounded theory approach

  • Wang Fei 1 &
  • Xue Shuangyan 2  

Humanities and Social Sciences Communications, volume 11, Article number: 1166 (2024)


  • Business and management
  • Information systems and information technology

The rapid development of mass entrepreneurship and innovation in China has substantially influenced the entrepreneurial environment, particularly among university students who are emerging as a vital entrepreneurial force. This study aimed to explore and identify the key characteristics and structural framework of entrepreneurial talents among university student entrepreneurs in China, specifically concerning their capacity to discover entrepreneurial opportunities. To achieve this, we conducted interviews with 21 exceptional university student entrepreneurs. A structural model was subsequently developed based on grounded theory, with data coded at three levels and tested for theoretical saturation. The results demonstrated that Chinese university students possess implicit and explicit capacities to detect business possibilities. Implicit abilities largely encompass the innate qualities of entrepreneurial drive and environmental intelligence, while explicit abilities comprise the acquired skills of learning, networking, and integration. This model can be used as a benchmark for assessing and enhancing the capacity of Chinese university students who are aspiring entrepreneurs to identify and exploit entrepreneurial opportunities.


Introduction

China has recently committed to improving its business environment, leading to a significant increase in independent entrepreneurship known as “mass entrepreneurship and innovation”. This trend has ignited multiple new market dynamics, driving the country’s steady economic growth. University students, recognized for their youthful vigor and rigorous education, play an increasingly important role as a key group of entrepreneurs, making substantial contributions to China’s entrepreneurial sector. However, the contemporary entrepreneurial landscape among university students is characterized by a phenomenon known as “one high and two lows”: there is a strong aspiration to initiate a firm, but actual execution and achievement rates are notably low (Lu, 2019). The overall entrepreneurial situation remains challenging. According to the 2022 China College Graduate Employment Report by Mycos, only 1.2% of China’s 2021 undergraduate cohort were self-employed (Mycos McKesson Research Institute, 2022). Moreover, the difficulties student entrepreneurs face in staying afloat have become more severe, with 58.5% of their ventures failing within three years (Wang, 2022). Opportunity identification is a crucial element in the entrepreneurial process for university students, and accurate comprehension and identification of entrepreneurial opportunities are crucial for the success of student initiatives. This underscores the urgent need for research into opportunity identification within the context of university student entrepreneurship.

The questions of “how to effectively identify entrepreneurial opportunities” and “who can identify entrepreneurial opportunities” have garnered significant attention in both academic research and practical applications (Ardichvili et al. 2003 ). Historically, the majority of empirical studies have considered opportunity identification as a singular dimension (Chen and Yang, 2009 ). As researchers continue to investigate, the study of opportunity identification has evolved from a one-dimensional perspective to a multi-dimensional approach (Hills et al. 1999 ). Hansen was the first to empirically demonstrate that opportunity identification can be conceptualized as a multi-dimensional process comprising stages such as preparation, incubation, insight, evaluation, and elaboration (Hansen et al. 2011 ). While previous studies have divided the concept of opportunity identification into various aspects, they have not completely clarified the connections between these aspects. Moreover, while various capabilities may impact entrepreneurial opportunity identification, there is no consensus on the specific capabilities essential for university students in identifying entrepreneurial opportunities.

Therefore, this study seeks to address the following research questions: First, aside from the ability to recognize and comprehend new opportunities, what other elements should be incorporated into the entrepreneurial opportunity identification competence of university students? Second, what is the correlation between these factors? Third, what is the conceptual description of the entrepreneurial opportunity identification ability of university students? This study specifically examines the entrepreneurial process of university students in China in order to address these problems adequately. Utilizing the grounded theory research method, the study aims to summarize the structural characteristics of entrepreneurial opportunity identification capability among Chinese university students. The goal is to establish a precise definition of this capability and develop a core element model for understanding entrepreneurial opportunity identification capability among university students. This research will serve as a foundation for future studies that assess the ability of university students to identify entrepreneurial opportunities.

Literature review

Study of entrepreneurial opportunity identification capabilities

The concept of entrepreneurial opportunity identification capability of university students originates from extensive study on entrepreneurial opportunity identification. Opportunity identification is a key variable in entrepreneurial research, as Corbett emphasized it as a core element in entrepreneurship studies (Corbett, 2005 ). Scholars, both domestic and international, have typically divided their perspectives on identifying entrepreneurial opportunities into three main schools of thought: the Austrian School of Economics, represented by Kirzner (Kirzner, 1997 ); the cognitive school of thought, represented by Baron (Baron et al. 2003 ); and the process school of thought, represented by Long and McMullan (Long and McMullan, 1984 ).

Various scholars have defined and explained entrepreneurial opportunity identification from different perspectives. From an economic standpoint, entrepreneurial opportunities emerge when there is a mismatch between product and market demand. Scholars from the Austrian School of Economics attribute opportunity identification to information heterogeneity (Kaish and Gilad, 1991). On this view, the market is made up of individuals holding different information, so some identify opportunities that others cannot. From a cognitive perspective, entrepreneurial opportunity identification relies on the cognitive processing of entrepreneurs. Within this domain, scholars have extensively discussed entrepreneurs’ alertness to potential opportunities and their cognitive frameworks. For instance, Baugher and Roberts emphasized the role of perceptual awareness in identifying entrepreneurial opportunities (Baugher and Roberts, 1999). Furthermore, scholars have explored the identifiable patterns or characteristics of opportunities using feature analysis and prototype models (Suddaby et al. 2015). From a process perspective, entrepreneurial opportunity identification is considered a difficult, multi-stage process, and scholars have put forward many models to explain it. For example, Long and McMullan delineated a model consisting of four stages: preconceived notion, opportunity discovery, opportunity elaboration, and decision-making (Long and McMullan, 1984). Smith proposed a five-step process encompassing preparation, incubation, insight, evaluation, and elaboration (Smith et al. 2009). Lumpkin and Zhang described entrepreneurial opportunity identification as a cognitive process involving the capturing and judgment of opportunities, divided into opportunity search and opportunity evaluation stages (Lumpkin and Lichtenstein, 2005; Zhang and Sun, 2012). Zhu and Zou analyzed the impact of entrepreneurial spirit on entrepreneurial opportunity identification among returnees, categorizing it into three dimensions: opportunity search, opportunity discovery, and opportunity evaluation (Zhu and Zou, 2016). Yin and Cai suggested that opportunity identification capability involves discerning and discovering opportunities through proactive learning, environmental observation, and information collection (Yin and Cai, 2012).

Review of existing research

Both domestic and international scholars have extensively studied the ability to identify entrepreneurial opportunities, significantly influencing entrepreneurship education practices and establishing a solid foundation for subsequent entrepreneurship research. However, the criteria these studies employ to categorize entrepreneurial opportunity identification are sometimes inconsistent (Davidsson, 2015). The economic, cognitive, and process perspectives are complementary lenses rather than contradictory accounts of how entrepreneurs identify opportunities.

When examining entrepreneurial cognition and processes, the capability to identify entrepreneurial opportunities is often viewed as a set of skills. However, there is a lack of detailed exploration of the nature of each element and their interrelationships. Limited research has focused on understanding the specific internal mechanisms of entrepreneurial opportunity identification capability, particularly among university students. The uniqueness of university student entrepreneurial groups lies in their capacity for growth, the availability of valuable resources, and the effectiveness of social networks. Thus, applying research on general entrepreneurs to university student entrepreneurs has limited applicability.

From a Chinese research standpoint, although some researchers have begun to focus on university student entrepreneurship, significant cultural differences between Eastern and Western cultures exist. In China’s unique collectivist cultural environment, social networks play a decisive role in entrepreneurial opportunity identification (Yang et al. 2014). This cultural context makes it challenging to directly apply theories from foreign entrepreneurship research to the entrepreneurial opportunity identification capability of Chinese university students. While a handful of scholars, such as Wang and Zhang (2017) and Wang and Yao (2014), have delved into theoretical explorations of university student entrepreneurial opportunity identification, there remains a lack of qualitative or empirical research examining the constituent elements of this ability.

Given the progress made in innovation and entrepreneurship education in China, it is crucial to prioritize improving theoretical research on university students’ ability to identify entrepreneurial opportunities, which is necessary to meet the requirements of an innovation-driven development model. To fill this void, this research uses a grounded theory methodology in the Chinese setting to construct a framework that illustrates the capacity of university students to identify entrepreneurial opportunities. The objective is to enhance theoretical research on innovation and entrepreneurship education in universities and to establish a basis for evaluating and improving Chinese university students’ entrepreneurial opportunity identification capability.

Research design

Research methodology

Currently, there is a scarcity of studies investigating the structural aspects of entrepreneurial opportunity identification abilities among Chinese university students. Additionally, the existing literature lacks sufficient theoretical support for comprehending the intricacies of these abilities in this particular context. Given the rapidly evolving and complex cognitive and behavioral processes involved in opportunity identification among university student entrepreneurs, conducting quantitative research in this area may encounter significant limitations.

This study employed the grounded theory approach as its research methodology, undertaking data collection and analysis to derive and develop theory from the data (Chen, 1999 ). Employing the classic grounded theory approach, the study aimed to explore the “university student’s entrepreneurial opportunity identification capability” through data coding and analysis, following the research process outlined in Fig. 1 (Fang et al. 2024 ; Zhao et al. 2023 ). The process of continuously comparing coding results from interview transcripts and iterative analysis resulted in theoretical saturation, which allowed for the development of a conceptual definition, refinement of dimensions, and construction of a conceptual model for the “university student’s entrepreneurial opportunity identification capability.”

Figure 1. Coding and data analysis process.

Data collection

The research data for this study were obtained through individual, in-depth interviews using a purposive sampling approach. A total of 21 outstanding university student entrepreneurs, including both current students and graduates, were selected from a sample pool of several universities in the Yangtze River Delta, Pearl River Delta, Northeast China, and Central and Western regions. The universities mentioned were included in the annual National Innovation and Entrepreneurship Typical Experience database released by the Ministry of Education from 2016 to 2019. These outstanding university student entrepreneurs were selected based on the following criteria: (1) entrepreneurial deeds serving as typical cases of university student’s innovation and entrepreneurship in their schools; (2) profitability of the enterprises they founded; (3) entrepreneurship spanning no more than three years, to avoid memory degradation and hindsight bias and ensure data reliability (Wang, 2015 ).

These selected entrepreneurs provide high-quality data because they are deeply involved in the entrepreneurial process, hold authentic insights into entrepreneurial matters, and are successful examples of university student entrepreneurship. Their strong entrepreneurial skills and personal influence make their interviews valuable for understanding and refining the entrepreneurial opportunity identification abilities of university students. Among the selected entrepreneurs, three interviewees were from “Double First-Class” universities, 15 from local high-level universities, and three from general undergraduate institutions.

To facilitate effective collaboration with participants and encourage genuine expression of their emotions, the interviews were conducted in an environment maintained at a temperature of 17–24 °C and a relative humidity of 30%–50%. The interview duration was limited to 30–60 min to prevent participant fatigue. Basic information about the research participants is presented in Table 1 .

Research process

The entire research process commenced in January 2023 and concluded in May 2023. All interviewees were scheduled in advance, and the interviews were conducted according to a predefined interview outline. Before the formal interviews, pre-interviews were conducted to ensure the collected data aligned with the research objectives. Furthermore, an entrepreneurship mentor was invited to provide input and adjustments to the introduction and questions, ensuring the effectiveness of the interviews. Due to the substantial number of interviewees, dispersed locations, and extensive information needed, recordings and notes were utilized to document the interviews with the consent of the participating students. The total duration of the interviews amounted to 26 h, resulting in transcripts of over 200,000 words based on the recorded interviews. Within 24 h of completing the interviews, the interview data was promptly organized, and any insights obtained during the interviews were recorded in the form of memoranda.

Model construction of university student entrepreneurs’ abilities in recognizing entrepreneurial opportunities

Open coding

Open coding was used to generate initial concepts and develop conceptual categories through systematic encoding and labeling. The researcher organized and sorted the interview transcripts of the 21 university student entrepreneurs, preserving the original semantics of the data. A labeling system using “F+serial number” was implemented to identify the transcripts, with initial concepts derived from the original statements or terms in the interview content. By attending to the connotations respondents conveyed through latent data, we could understand the deeper meanings behind their statements, which were shaped by their environmental perceptions, subjective views, personal feelings, and other relevant information.

For example, the respondents’ expressions like “In the field of entrepreneurship, opportunity is about identifying problems that others cannot solve, or problems that others have already solved but not well enough”, “You can start a business with some small details in every aspect of your life” and “Identifying some equipment problems in the research, thinking about ways to improve them, and unearthing potential business opportunities” all conveyed similar meaning orientations. Due to the frequent use of the term “problem” within the interview content, with a significant focus on “identifying problems,” the study classified these remarks as “the ability to identify problems.” After multiple rounds of comparison and merging, a total of 603 codes were generated, resulting in the identification of 92 nodes labeled as “a+serial number” (see Table 2).

The labels were then further refined based on the content of the interview questions, the discussed themes, and the context in which they were used. Initial concepts were extracted by combining the summarizing vocabulary of the interviewees and by reviewing the relevant literature. Expressions such as “reciprocal activities with others” and “like to socialize” were summarized, and attention was paid to the summary expressions before and after the relevant statements of the interviewees, such as “communicating with superior and subordinate units, and mastering some truths about how to deal with the world” and “trustworthy interpersonal relationships are important to me”. As a result, it was concluded that the important nodes indicate the significance of interpersonal ties in the entrepreneurial process. Consequently, the original idea of “willingness to build relationships” was identified. From this process, 26 original concepts labeled as “A+serial number” were extracted through the conceptualization and organization of the raw data. Further details are provided in Table 3 .

Axial Coding

The axial coding process reveals latent logical connections between categories by considering several types of relationships, such as causality, context, similarity, and semantics. These relationships inform the extraction of both main categories and subcategories; axial coding is chiefly used to explore the interconnectedness and similarities among conceptual categories. The research involved classifying and organizing the 92 nodes and 26 initial concepts obtained through open coding. For example, the initial concepts “A6 emotional recovery ability, A8 reflective ability, A11 understanding of failure” are all related to “entrepreneurial failure,” and the terms “recovery, reflection, understanding” all involve the category of “learning.” In analyzing the interviews related to entrepreneurial failure, it was evident that there was a logical progression from self-reflection to taking practical action to address problems after experiencing entrepreneurial failure. These three concepts were therefore synthesized into the overarching concept of “Learning from Failure,” in line with the theory of entrepreneurial learning from failure (Tang et al. 2021). Considering the changing nature of entrepreneurship among university students, we identified five main categories (designated “B+serial number”) and eleven subcategories (designated “b+serial number”). Further details are provided in Table 4.
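
To make the labeling scheme concrete, the sketch below (purely illustrative, not from the paper) represents the coding hierarchy as a nested structure: open-coding nodes (“a+serial number”) roll up into initial concepts (“A+serial number”), which axial coding groups into subcategories (“b+serial number”) and main categories (“B+serial number”). The serial numbers and wordings here are invented, though the example categories echo ones named in the text.

```python
# Hypothetical illustration of the coding hierarchy; every serial number and
# entry is invented, mirroring the "a/A/b/B + serial number" convention above.
coding_tree = {
    "B1 Environmental insight": {                  # main category (axial coding)
        "b1 Alertness": {                          # subcategory
            "A1 Ability to identify problems": [   # initial concept (open coding)
                "a1 notices problems others cannot solve",
                "a2 spots business potential in everyday details",
            ],
        },
    },
}

def count_open_codes(tree):
    """Count leaf nodes (open-coding labels) under a branch of the hierarchy."""
    if isinstance(tree, list):
        return len(tree)
    return sum(count_open_codes(child) for child in tree.values())

print(count_open_codes(coding_tree))  # -> 2
```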

Selective Coding

Through systematic analysis, a “core category” is identified from the existing conceptual categories and subcategories. This process establishes the relationship between the main and the core categories, ultimately constructing a comprehensive theoretical model. Building upon open coding and axial coding, this research, in conjunction with interview data, has focused on the central domain of “university student entrepreneurs’ abilities in recognizing entrepreneurial opportunities”. Through iterative discussions and contemplation, a highly abstract generalization has been developed, revealing its inherent logical structure and ultimately linking the storylines within the main categories. (To be elaborated in Part 5).

Saturation Test

To ensure the scientific rigor of grounded theory research and the precision of its conclusions, it is imperative to continually seek new evidence and engage in theoretical sampling. During this process, continuous comparisons, analyses, and modifications of existing categories are required until no new categories emerge, thereby achieving theoretical saturation.

In this study, 15 cases were selected as random samples for analysis. The Pandit NR method (Glaser et al. 1968) was employed for theoretical saturation testing. Subsequently, the remaining six cases underwent the three-level coding process within the framework of grounded theory. The results continued to align with the model of “university student entrepreneurs’ abilities in recognizing entrepreneurial opportunities”, with no new categories emerging. Therefore, the constructed theoretical model can be considered saturated.

Model and constituent elements of university student entrepreneurs’ abilities in recognizing entrepreneurial opportunities

The research methodology based on grounded theory has been used to outline the framework of university student’s ability to identify entrepreneurial opportunities. This ability can be conceptualized as a multi-faceted skill set comprised of entrepreneurial drive, environmental insight, learning abilities, networking abilities, and integration abilities. Essentially, it represents a dynamic competency among university students venturing into entrepreneurship, enabling them to adeptly adapt and synchronize their knowledge and technological resources in identifying and capitalizing on entrepreneurial opportunities in response to environmental shifts.

McClelland (1973) proposed the influential iceberg model, which partitions individual competence into a measurable portion (“above the iceberg”) and a hidden aspect (“below the iceberg”). Consequently, in formulating the model of university students’ entrepreneurial opportunity recognition ability, it is essential to examine both the externally measurable competencies and the latent attributes of university entrepreneurs. Utilizing the iceberg paradigm, this study classifies university students’ entrepreneurial opportunity recognition ability into implicit and explicit capabilities. Notably, entrepreneurial drive and environmental insight represent relatively concealed “soft” skills that significantly influence university entrepreneurs’ ability to recognize opportunities (Deng et al. 2011). Conversely, learning abilities, networking abilities, and integration abilities are more overt “hard” skills that can be observed, measured, and improved. The resulting model is depicted in Fig. 2.

Figure 2. Structural model of university student entrepreneurs’ abilities in recognizing entrepreneurial opportunities.

Implicit capabilities

Implicit capabilities, such as self-conceptual characteristics and personal traits, can be likened to submerged icebergs, lying beneath the surface and eluding easy observation yet playing a vital role in shaping individuals’ behaviors (Wang and Chen, 2002 ). Profound implicit abilities can serve as a powerful driving force for university students in identifying entrepreneurial opportunities and fostering personal enthusiasm, initiative, and creativity.

Entrepreneurial drive

Entrepreneurial drive can be divided into two components: endogenous entrepreneurial drive and exogenous entrepreneurial drive. Endogenous entrepreneurial drive originates from university students’ desire for self-realization and personal growth, encompassing their entrepreneurial beliefs and pursuit of self-value. Exogenous entrepreneurial drive, on the other hand, stems from external influences in the entrepreneurial ecosystem, such as the pursuit of economic value, alleviation of employment pressure, and the guidance of entrepreneurial role models. Entrepreneurial drive is the initial catalyst that enables university students to recognize entrepreneurial opportunities, inspiring them to actively explore these chances.

Extensive academic research consistently confirms a strong correlation between entrepreneurial drive and entrepreneurial behavior. The level of motivation directly determines the willingness, behavior, ability, and effort of entrepreneurs in entrepreneurial activities, thereby indirectly influencing entrepreneurial outcomes (Shane and Venkataraman, 2000). Endogenous entrepreneurial drive is a crucial factor in recognizing entrepreneurial opportunities, as university students’ internal drive for entrepreneurship stems from their pursuit of personal ideals and life enrichment. Strong beliefs in entrepreneurship and a desire to create personal value motivate university students to proactively seek entrepreneurial opportunities.

Moreover, over 80% of the respondents highlight the significant impact of exogenous entrepreneurial drive on their identification of entrepreneurial opportunities. The majority of the interviewees aspire to attain financial independence through entrepreneurship, while some perceive it as a means to mitigate employment pressures. Additionally, entrepreneurial role models serve as guiding influences. Inspired by these role models, university student entrepreneurs can uncover more entrepreneurial opportunities (Cheng and Luo, 2022 ).

Environmental insight

Environmental insight reflects the ability of university student entrepreneurs to keenly explore and analyze industry trends and environmental changes, laying the foundation for them to deeply understand and effectively seize entrepreneurial opportunities. Through selective coding, environmental insight can be categorized into three components: alertness, insight, and policy awareness.

Alertness refers to the ability of university student entrepreneurs to perceive market and societal changes with acuity, a critical element in searching for new information and identifying business opportunities (Zhang and Wang, 2019 ). Entrepreneurs with a high level of alertness can fully engage with the information flows, maintain sensitivity to market imbalances, recognize the interconnectivity of information, and effectively translate internal insights into external business opportunities. As a result, individuals who have a higher level of alertness are more likely to unearth opportunities compared to their peers with lower alertness levels. University student entrepreneurs enhance their ability to recognize entrepreneurial opportunities by remaining alert to external information, enabling them to capture valuable information often overlooked in the complex and ever-changing market environment (Westhead et al. 2009 ).

Insight significantly enhances the understanding of university student entrepreneurs when initiating and pursuing new ideas. By carefully observing and analysing markets, competitors, and consumers, insights enable entrepreneurs to discern market essence and developmental trends. This, in turn, facilitates better exploration of potential entrepreneurial opportunities and market gaps while effectively mitigating risks and challenges within the environment.

Policy awareness pertains to understanding and comprehending government policies and regulations. University student entrepreneurs should have a comprehensive understanding of support policies for entrepreneurship and innovative sectors, as well as the applicable rules and regulations. This knowledge enhances their understanding of the entrepreneurial environment and policy directions, thereby helping them identify entrepreneurial opportunities. By drawing on their knowledge of national macro-policies and industry development trends, university student entrepreneurs can gain profound insights into their entrepreneurial projects’ opportunities, challenges, and policy implications. This capability enables them to promptly identify opportunities within the external environment (Deng et al. 2011).

Explicit capabilities

Explicit capabilities, such as knowledge and skills necessary for university entrepreneurs to perceive entrepreneurial opportunities, can be likened to icebergs visible on the water’s surface—readily observable and measurable, and amenable to enhancement through structured training programs.

Learning abilities

University students can enhance their internal learning qualities, such as learning efficiency and attitude, by pursuing higher education. This enables them to obtain external learning aspects, such as knowledge and skills (Xu, 2016 ). An individual’s learning ability can effectively expedite the speed at which they identify entrepreneurial opportunities (Shane and Venkataraman, 2000 ). Therefore, the ability to learn plays a crucial role for university student entrepreneurs in recognizing entrepreneurial opportunities. The coding process reveals that the learning ability of university student entrepreneurs primarily manifests in two aspects: entrepreneurial learning and learning from failures.

Entrepreneurial learning is a critical avenue for university students to acquire knowledge about entrepreneurship, as it provides them with a cognitive framework that positively contributes to the identification of opportunities (Wang, 2019 ). Amidst the dynamic landscape of entrepreneurship, respondents widely agree that “university students must learn new ideas and technologies to adapt to the constant impact of the internet wave.” During the entrepreneurial process, university student entrepreneurs utilize entrepreneurial learning to gather industry, technical, and financial information, enabling them to quickly assess the feasibility and profitability of entrepreneurial opportunities (Luo and Zhang, 2023 ). Their capacity to acquire knowledge about entrepreneurship directly correlates with their ability to recognize exceptional entrepreneurial prospects.

Entrepreneurial activities take place in an environment of uncertainty and involve an ongoing cycle of experimentation, with failure being a common occurrence. University students lacking prior entrepreneurial experience have a poor likelihood of reaching immediate success. Failure situations contain valuable information that is not easily recognizable. Learning from failure is a dynamic process in which entrepreneurs engage in cognitive reflection, linkage, and application during entrepreneurial practice. Additionally, it serves as the fundamental basis for identifying opportunities (Yu et al. 2019 ). University student entrepreneurs can expand their scope for exploring new entrepreneurial opportunities by learning from failure (Yu et al. 2016 ). However, learning is not an inevitable outcome of failure. Whether university student entrepreneurs “regroup” or “become discouraged” after experiencing entrepreneurial failure depends on their perception of failure. University student entrepreneurs afraid of failure tend to focus more on the negative consequences of entrepreneurial failure, leading to a pessimistic view of entrepreneurial opportunities, which is not conducive to engaging in subsequent entrepreneurial activities (Leon and Saies, 2018 ). After experiencing the psychological and economic pressures of entrepreneurial failure, university student entrepreneurs often become ensnared in a cycle of negative emotions, facing setbacks or even abandoning their entrepreneurial pursuits. Conversely, individuals with strong emotional resilience can swiftly rebound from failure and actively participate in entrepreneurial endeavors to pursue new opportunities.

Furthermore, engaging in introspection on failure is a crucial aspect of learning from failure for university student entrepreneurs. They reassess their previous entrepreneurial endeavors, draw lessons from them, and acquire knowledge that is often more valuable and harder to grasp than what can be learned from entrepreneurial success. This process helps them identify new entrepreneurial opportunities.

Networking abilities

Networking capability refers to an entrepreneur’s ability to identify, develop, maintain, and leverage the value of personal network relationships to access information and resources (Xiang et al. 2018 ). One of the main challenges university students face in entrepreneurship is their limited knowledge and resources, which often hinder their entry into the entrepreneurial arena. Based on interview data analysis, the networking capability of university student entrepreneurs comprises two essential components: network-building capability and network management capability. Network-building capability underscores university student entrepreneurs’ belief in the significance of social networks and their proactive use of relationship-building skills to expand their networks. They build contacts with partners, industry experts, investors, and other relevant individuals to gather information on market demands, product innovation, and business opportunities, effectively facilitating the identification of entrepreneurial opportunities. These network relationships established by university student entrepreneurs are cultivated during the entrepreneurial process or earlier stages, such as learning, socializing, or internships. Maintaining and managing these relationships are crucial during the entrepreneurial journey.

Network management capability empowers university student entrepreneurs who possess proficient networking skills to foster interaction and knowledge exchange among network members through effective communication, cooperation, and knowledge sharing (Ma et al. 2022 ). This capability allows them to access diverse knowledge sources within their networks and identify additional entrepreneurial opportunities. Through their networking capabilities, university student entrepreneurs can establish and nurture their own networks of relationships, fostering an environment of trust and collaboration. This allows them to effectively coordinate and manage various relationships (Man et al. 2002 ), access and acquire additional external resources, and improve their perception of business opportunities (Maurer and Ebers, 2006 ).

Integration abilities

University student entrepreneurs acquire knowledge through their own learning abilities and gain additional information and resources through their networking aptitude. However, the knowledge acquired and the information and resources gathered from social networks often exhibit scattered and redundant characteristics. Mere possession of this knowledge and these resources does not create value or identify market opportunities. Effectively integrating limited knowledge and resources becomes the key to grasping and realizing the value of entrepreneurial opportunities, necessitating university student entrepreneurs to possess the capability to integrate. Based on the analysis of interview data, it is evident that university student entrepreneurs possess the ability to integrate knowledge and resources.

The knowledge integration capability reflects the ability of university student entrepreneurs to synthesize diverse knowledge sources. Vertically, it closely links newly acquired knowledge with existing knowledge, expedites the internalization process (Yu et al. 2019), and provides a knowledge-based advantage for recognizing entrepreneurial opportunities. Horizontally, it entails the integration of knowledge from other disciplines and the exploration of possible synergies between domain-specific knowledge and external knowledge. Such integration is instrumental in identifying novel opportunities and exploring new product markets (Suarez et al. 2018).

University student entrepreneurs skilled in knowledge integration effectively assimilate information from external networks, merging it with their own insights. This process generates new concepts, ideas, and creativity, ultimately creating new entrepreneurial opportunities (Zhang and Sun 2017). The resource integration capability, in turn, reflects the ability of university student entrepreneurs to combine existing resources flexibly and creatively, enabling fragmented resources to generate a synergistic effect in which “1 + 1 > 2”. This approach aligns resources with market demands and specific resource requirements, enabling entrepreneurs to identify and capitalize on opportunities effectively (Sun et al. 2021). By maximizing current resources and reducing exclusive dependence on external sources, university student entrepreneurs optimize resource usage (Hota et al. 2019), facilitating their engagement in innovative activities and enhancing their overall entrepreneurial outcomes.

Conclusions and contributions

Conclusions

This study utilized grounded theory to delineate specific dimensions of university students’ entrepreneurial opportunity identification capability and to explore their interrelationships. A structural model of this capability was constructed, opening the “black box” of entrepreneurial opportunity identification capability.

The entrepreneurial opportunity identification capability of university students refers to their aptitude for efficiently organizing and utilizing appropriate knowledge and technologies in identifying entrepreneurial opportunities amidst environmental changes. This capability encompasses both intrinsic and implicit abilities, such as entrepreneurial drive and environmental insight, which are crucial in how university students identify opportunities. It also includes external explicit abilities, such as learning, building connections, and integrating knowledge, which can be observed, measured, and improved.

Contributions

This study makes theoretical contributions by introducing the concept of university students’ entrepreneurial opportunity identification capability through grounded theory. It addresses a gap in research on entrepreneurial opportunities by examining university student entrepreneurs as a distinct category. By operationalizing general entrepreneurial opportunity identification capability, the study offers a new conceptual framework that integrates cognitive and process perspectives, elucidating the nature of each element and their interrelationships. The study reveals the specific mechanisms involved in identifying entrepreneurial opportunities by conducting in-depth interviews with prominent university student entrepreneurs in China to extract conceptual meanings and connections.

Moreover, this study makes practical contributions. The structural model of university students’ entrepreneurial opportunity recognition ability offers universities a benchmark for evaluating students’ current abilities and the development of specific sub-competencies. This understanding enables universities to implement practical and effective measures to enhance entrepreneurial opportunity recognition skills, thereby driving innovation and reform in entrepreneurship education and improving the quality of entrepreneurial talent cultivation. At the individual level, understanding the concept and structure of this ability helps students accurately assess their own recognition skills, plan strategically, and develop the relevant competencies, leading to higher-quality ventures, increased success rates, and the realization of their full potential.

Limitations

This study leaves several avenues for further exploration. First, regarding the research sample, the study relies on limited primary data, which may have resulted in an incomplete extraction of the elements comprising university students’ entrepreneurial opportunity recognition ability. Future studies could draw on a broader range of university students to summarize these elements more comprehensively. Second, limited time and resources prevented a thorough examination of the model on a substantial dataset. Developing a measurement scale for university students’ entrepreneurial opportunity recognition ability would therefore allow the structure proposed here to be validated through quantitative research.

Data availability

The data utilized in this study are proprietary to Jiangsu University and subject to confidentiality restrictions. Regrettably, we are unable to publicly share the data in its entirety. However, if readers have a specific interest or reasonable need to access the data, we encourage them to contact the corresponding author for further details and potential arrangements.

Ardichvili A, Cardozo R, Ray S (2003) A theory of entrepreneurial opportunity identification and development. J. Bus. Venturing 18(1):105–123

Baron JA, Cole BF, Sandler RS et al. (2003) A randomized trial of aspirin to prevent colorectal adenomas. N. Engl. J. Med. (10):891–899

Baugher JE, Roberts JT (1999) Perceptions and worry about hazards at work: Unions, contract maintenance, and job control in the US petrochemical industry. Industrial Relations: A Journal of Economy and Society (4):522–541

Chen X (1999) The Ideas and Methods of Grounded Theory. Educational Research and Experiment (04):58–63

Chen M, Yang Y (2009) Typology and performance of new ventures in Taiwan. Int. J. Entrepreneurial Behav. Res. 15(5):398–414

Cheng J, Luo J (2022) How Does Entrepreneurial Human Capital Activate Opportunity Entrepreneurship?—A Moderated Mediation Model. Sci. Sci. Manag. S. T. 43(06):110–122

Corbett A (2005) Universities and the Europe of knowledge: ideas, institutions and policy entrepreneurship in European Union higher education policy, 1955–2005. Palgrave Macmillan, Basingstoke

Davidsson P (2015) Entrepreneurial opportunities and the entrepreneurship nexus: A re-conceptualization. J. Bus. Venturing 30(5):674–695

Deng S, Jiao H, Feng Z (2011) Research on the Process Mechanism of Corporate Strategic Transformation in Complex Dynamic Environments. Sci. Res. Manag. 88(1):60–67

Fang Y, Li J, Si Y (2024) Policy tool spectrum model and policy implication of the field manager system for arable land protection: an empirical analysis of policy texts based on grounded theory. China Land Sci. 38(01):94–104

Glaser BG, Strauss AL, Strutzel E (1968) The discovery of grounded theory: strategies for qualitative research. Nurs. Res. 17(4):364

Hansen DJ, Lumpkin GT, Hills GE (2011) A multidimensional examination of a creativity-based opportunity recognition model. Int. J. Entrepreneurial Behav. Res. 17(5):515–533

Hills GE, Shrader RC, Lumpkin GT (1999) Opportunity recognition as a creative process. Front. Entrepreneurship Res. 19(19):216–227

Hota PK, Mitra S, Qureshi I (2019) Adopting bricolage to overcome resource constraints: The case of social enterprises in rural India. Manag. Organ. Rev. 15(2):371–402

Kaish S, Gilad B (1991) Characteristics of opportunities search of entrepreneurs versus executives: Sources, interests, general alertness. J. Bus. Venturing 6(1):45–61

Kirzner I (1997) Entrepreneurial discovery and the competitive market process: an Austrian approach. J. Econ. Lit. 35:60–85

Leon N, Saies JA (2018) Motivated but not starting: How fear of failure impacts entrepreneurial intentions. Small Enterp. Res. 25(2):1–16

Long WA, McMullan WE (1984) Mapping the new venture opportunity identification process. University of Calgary, Faculty of Management, 252–256

Lu Q (2019) The Realistic Dilemmas and Breakthrough Paths of College Students’ Entrepreneurship. Contemporary Youth Research (03):90–95

Lumpkin GT, Lichtenstein BB (2005) The role of organizational learning in the opportunity-recognition process. Entrep. Theory Pract. (4):451–472

Luo X, Zhang X (2023) The Impact of Entrepreneurial Learning on Entrepreneurial Opportunity Development. Contemporary Economic Research (03):109–115

Ma H, Sun Q, Wu J (2022) Network Market Orientation, Opportunity Capability, and New Venture Performance—An Interaction Model. J. Jilin Univ. (Soc. Sci. Ed.) 62(01):138–151+237–238

Man TWY, Lau T, Chan KF (2002) The competitiveness of small and medium enterprises: A conceptualization with focus on entrepreneurial competencies. J. Bus. Venturing 17(2):123–142

Maurer I, Ebers M (2006) Dynamics of social capital and their performance implications: Lessons from biotechnology start-ups. Adm. Sci. Q. 51(2):262–292

McClelland DC (1973) Testing for competence rather than for “intelligence”. Am. Psychologist 28(1):1–14

MyCOS Research Institute (2022) China’s higher vocational students’ employment report. Social Sciences Academic Press, Beijing

Shane S, Venkataraman S (2000) The promise of entrepreneurship as a field of research. Acad. Manag. Rev. 25(1):217–226

Smith BR, Matthews CH, Schenkel MT (2009) Differences in entrepreneurial opportunities: The role of tacitness and codification in opportunity identification. J. Small Bus. Manag. (1):38–57

Suarez FF, Utterback J, Von Gruben P et al. (2018) The hybrid trap: Why most efforts to bridge old and new technology miss the mark. Sloan Manag. Rev. 59(3):52–57

Suddaby R, Bruton GD, Si SX (2015) Entrepreneurship through a qualitative lens: Insights on the construction and/or discovery of entrepreneurial opportunity. J. Bus. Venturing (1):1–10

Sun Y, Sun H, Ding Y (2021) Resource Matching and Entrepreneurial Opportunity Recognition: Based on the Theory of Resource Arrangement. Sci. Technol. Prog. Countermeasures 38(02):19–28

Tang C, Niu C, Shi Y (2021) Research on the impact mechanism of failure learning on enterprise performance. Sci. Technol. Prog. Countermeasures 38(22):141–150

Wang C, Chen M (2002) Characterization of managerial competence: A test of structural equation modeling. Psychological Sci. (05):513–516

Wang F, Yao G (2014) Research on the Enhancement of College Students’ Entrepreneurial Opportunity Recognition Capability. Journal of the National Academy of Education Administration (08):57–60

Wang J (2015) The process of entrepreneurial opportunity identification among Chinese college students during their school years: a study based on grounded theory. China Human Resource Development (18):86–93

Wang J (2019) Research on the Mechanism of Innovative Entrepreneurship Education Enhancing College Students’ Opportunity Recognition Ability. Technoeconomics & Management Research (08):32–38

Wang J, Zhang D (2017) Exploring the Path of Transforming Prior Knowledge into Entrepreneurial Opportunity Recognition Capability: A Study Based on Grounded Theory. Res. Dev. Manag. 29(03):21–30

Wang B (2022) Chinese Undergraduate Employment Report. Social Sciences Academic Press, Beijing

Westhead P, Ucbasaran D, Wright M (2009) Information search and opportunity identification: the importance of prior business ownership experience. Int. Small Bus. J. 27(6):659–680

Xiang G, Pan K, Zhang W (2018) Network Relationships, Entrepreneurial Opportunity Identification, and Entrepreneurial Decision-Making: An Empirical Study of Zhejiang New Start-up Enterprises. Sci. Technol. Manag. Res. 38(22):169–177

Xu D (2016) Research on the Evaluation Index System of College Students’ Learning Ability Achievement. Journal of the National Academy of Education Administration (12):66–71

Yang J, Duan M, Xu B (2014) The effect of cultural differences on innovation opportunity identification ability: based on a social network perspective. Sci. Technol. Prog. Countermeasures 31(19):6–9

Yin M, Cai L (2012) An Analysis of the Current Research Status and Future Prospects of Entrepreneurial Capability. Foreign Econ. Manag. 34(12):1–11

Yu F, Liu M, Wang L et al. (2019) The Impact Mechanism of Knowledge Coupling on Green Innovation in Manufacturing Enterprises: The Moderating Role of Redundant Resources. Nankai Bus. Rev. 22(03):54–65+76

Yu X, Hu Z, Chen Y et al. (2016) Discovering Business Opportunities from Failures: The Role of Psychological Safety and Voice Behavior. Manag. Rev. 28(7):154–165

Yu X, Tao X, Li Y (2019) Seeing the Details? Failure Learning, Opportunity Recognition, and New Product Development Performance. J. Manag. Eng. 33(1):51–59

Zhang H, Sun X (2017) The Impact of Entrepreneur’s External Social Capital on Entrepreneurial Opportunity Recognition from the Perspective of Network Embedding. Stud. Sci. Sci. 38(12):133–147

Zhang X, Sun Z (2012) Analysis of entrepreneurial opportunity identification mechanism. Yunnan Social Science (4):94–97

Zhang X, Wang C (2019) Entrepreneurial Alertness, Entrepreneurial Opportunity Recognition, and Entrepreneurial Success. J. Soochow Univ. (Philos. Soc. Sci. Ed.) 40(02):99–108

Zhao X, Liu X, Yu Y et al. (2023) Cross-boundary Leadership of Higher-order Teams and Its Theoretical Model Construction: An Exploration Based on Classical Grounded Theory. J. Xihua Univ. (Philos. Soc. Sci. Ed.) 42(02):72–85

Zhu J, Zou L (2016) Research on factors influencing the identification of entrepreneurial opportunities of returnees. Sci. Technol. Prog. Countermeasures (17):125–130

Acknowledgements

This work was supported by the General Project of Humanities and Social Sciences, Ministry of Education of China (18YJA880079).

Author information

Authors and affiliations

School of Management, Jiangsu University, Zhenjiang, China

School of Teacher Education, Jiangsu University, Zhenjiang, China

Xue Shuangyan

Contributions

Wang Fei and Xue Shuangyan contributed to the study’s conception and design. Material preparation, data collection, and analysis were performed by Wang Fei and Xue Shuangyan. The first draft of the manuscript was written by Wang Fei and Xue Shuangyan. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Xue Shuangyan.

Ethics declarations

Competing interests

The authors declare no competing interests.

Ethical statements

Approval was obtained from the ethics committee of The Evidence-based Research Center for Educational Assessment (ERCEA) Research Ethical Review Board at Jiangsu University. The procedures used in this study adhere to the tenets of the Declaration of Helsinki. The ethical approval number of this study is ERCEA2306.

Informed consent

Written informed consent was obtained from all participants within the two weeks before the study began.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Documentary evidence

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.

About this article

Cite this article

Fei, W., Shuangyan, X. Structural model and characteristics of entrepreneurial opportunity recognition abilities among university students in China: a grounded theory approach. Humanit Soc Sci Commun 11, 1166 (2024). https://doi.org/10.1057/s41599-024-03699-7

Received: 02 October 2023

Accepted: 30 August 2024

Published: 09 September 2024

DOI: https://doi.org/10.1057/s41599-024-03699-7

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

Quick links

  • Explore articles by subject
  • Guide to authors
  • Editorial policies
