Optional Lab Activities

Lab objectives.

At the conclusion of the lab, the student should be able to:

  • define the following terms: metabolism, reactant, product, substrate, enzyme, denature
  • describe what the active site of an enzyme is (be sure to include information regarding the relationship of the active site to the substrate)
  • describe the specific action of the enzyme catalase, including the substrate and products of the reaction
  • name the organelle in which catalase is found in every plant or animal cell
  • list the factors that can affect the rate of a chemical reaction and enzyme activity
  • explain why enzymes have an optimal pH and temperature to ensure greatest activity (greatest functioning) of the enzyme (be sure to consider how virtually all enzymes are proteins and the impact that temperature and pH may have on protein function)
  • explain why the same type of chemical reaction performed at different temperatures revealed different results/enzyme activity
  • explain why warm temperatures (but not boiling) typically promote enzyme activity while cold temperatures typically decrease enzyme activity
  • explain why increasing enzyme concentration promotes enzyme activity
  • explain why the optimal pH of a particular enzyme promotes its activity
  • if given the optimal conditions for a particular enzyme, indicate which experimental conditions using that particular enzyme would show the greatest and least enzyme activity

Introduction

Hydrogen peroxide is a toxic byproduct of many chemical reactions that occur in living things. Although it is produced in small amounts, living things must detoxify it by breaking hydrogen peroxide down into water and oxygen, two harmless molecules. The organelle responsible for destroying hydrogen peroxide is the peroxisome, which uses the enzyme catalase. Both plants and animals have peroxisomes containing catalase. The catalase sample for today’s lab will be from a potato.

Enzymes speed the rate of chemical reactions. A catalyst is a chemical involved in, but not consumed in, a chemical reaction. Enzymes are proteins that catalyze biochemical reactions by lowering the activation energy necessary to break the chemical bonds in reactants and form new chemical bonds in the products. Catalysts bring reactants closer together in the appropriate orientation and weaken bonds, increasing the reaction rate. Without enzymes, chemical reactions would occur too slowly to sustain life.

The functionality of an enzyme is determined by the shape of the enzyme. The area in which bonds of the reactant(s) are broken is known as the active site. The reactants of enzyme catalyzed reactions are called substrates. The active site of an enzyme recognizes, confines, and orients the substrate in a particular direction.

Enzymes are substrate specific, meaning that they catalyze only specific reactions. For example, proteases (enzymes that break peptide bonds in proteins) will not work on starch (which is broken down by the enzyme amylase). Notice that both of these enzymes end in the suffix -ase. This suffix indicates that a molecule is an enzyme.

Environmental factors may affect the ability of enzymes to function. You will design a set of experiments to examine the effects of temperature, pH, and substrate concentration on the ability of enzymes to catalyze chemical reactions. In particular, you will be examining the effects of these environmental factors on the ability of catalase to convert H₂O₂ into H₂O and O₂.

The Scientific Method

As scientists, biologists apply the scientific method. Science is not simply a list of facts but an approach to understanding the world around us. It is the use of the scientific method that differentiates science from other fields of study that attempt to improve our understanding of the world.

The scientific method is a systematic approach to problem solving. Although some argue that there is not one single scientific method but rather a variety of methods, each of these approaches, whether explicit or not, tends to incorporate a few fundamental steps: observing, questioning, hypothesizing, predicting, testing, and interpreting the results of the test. The distinction between these steps is not always clear, particularly between hypotheses and predictions. For our purposes, however, we will differentiate each of these steps in our applications of the scientific method.

You are already familiar with the steps of the scientific method from previous lab experiences. You will need to use your scientific method knowledge in today’s lab in creating hypotheses for each experiment, devising a protocol to test your hypothesis, and analyzing the results. Within the experimentation process it will be important to identify the independent variable, the dependent variable, and standardized variables for each experiment.

Part 1: Observe the Effects of Catalase

  • Obtain two test tubes and label one as A and one as B.
  • Use your ruler to measure and mark on each test tube 1 cm from the bottom.
  • Fill each test tube with catalase (from the potato) to the 1 cm mark.
  • Add 10 drops of hydrogen peroxide to the tube marked A.
  • Add 10 drops of distilled water to the tube marked B.
  • Record the bubbling height in tube A: ________
  • Record the bubbling height in tube B: ________
  • What happened when H₂O₂ was added to the potato in test tube A?
  • What caused this to happen?
  • What happened in test tube B?
  • What was the purpose of the water in tube B?

Part 2: Effects of pH, Temperature, and Substrate Concentration

Observations.

From the introduction and your reading, you have some background knowledge on enzyme structure and function. You also just observed the effects of catalase on the reaction in which hydrogen peroxide breaks down into water and oxygen.

From the objectives of this lab, our questions are as follows:

  • How does temperature affect the ability of enzymes to catalyze chemical reactions?
  • How does pH affect the ability of enzymes to catalyze chemical reactions?
  • What is the effect of substrate concentration on the rate of enzyme catalyzed reactions?

Based on the questions above, come up with some possible hypotheses. These should be general, not specific, statements that are possible answers to your questions.

  • Temperature hypothesis
  • pH hypothesis
  • Substrate concentration hypothesis

Test Your Hypotheses

Based on your hypotheses, design a set of experiments to test your hypotheses. Use your original experiment to shape your ideas. You have the following materials available:

  • Catalase (from potato)
  • Hydrogen peroxide
  • Distilled water
  • Hot plate (for boiling water)
  • Acidic pH solution
  • Basic pH solution
  • Thermometer
  • Ruler and wax pencil

Write your procedure to test each hypothesis. You should have three procedures, one for each hypothesis. Make sure your instructor checks your procedures before you continue.

  • Procedure 1: Temperature
  • Procedure 2: pH
  • Procedure 3: Concentration

Record your results—you may want to draw tables. Also record any observations you make. Interpret your results to draw conclusions.
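
If you would rather keep your results electronically than on paper, a minimal sketch along the following lines can stand in for a hand-drawn table (plain Python; the condition labels and the example value are placeholders, not values from this lab):

    # Simple electronic stand-in for the results tables (hypothetical labels/values).
    results = {
        "Temperature": {"cold": None, "room temperature": None, "warm": None},
        "pH": {"acidic": None, "neutral": None, "basic": None},
        "Substrate concentration": {"low": None, "medium": None, "high": None},
    }

    # Example entry: bubbling height in millimeters for one condition (placeholder value).
    results["Temperature"]["room temperature"] = 12

    # Print each experiment as a small table.
    for experiment, rows in results.items():
        print(experiment)
        for condition, height_mm in rows.items():
            print(f"  {condition:20s} {height_mm if height_mm is not None else '---'} mm")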

  • Do your results match your hypothesis for each experiment?
  • Do the results lead you to reject or fail to reject your hypothesis, and why?
  • What might explain your results? If your results are different from your hypothesis, why might they differ? If the results matched your predictions, hypothesize some mechanisms behind what you have observed.

Communicating Your Findings

Scientists generally communicate their research findings in written reports. Save the work that you have done above. You will use it to write a lab report a little later in the course.

Sections of a Lab Report

  • Title Page:  The title describes the focus of the research. The title page should also include the student’s name, the lab instructor’s name, and the lab section.
  • Introduction:  The introduction provides the reader with background information about the problem and provides the rationale for conducting the research. The introduction should incorporate and cite outside sources. You should avoid using websites and encyclopedias for this background information. The introduction should start with more broad and general statements that frame the research and become more specific, clearly stating your hypotheses near the end.
  • Methods:  The methods section describes how the study was designed to test your hypotheses. This section should provide enough detail for someone to repeat your study. This section explains what you did. It should not be a bullet list of steps and materials used; nor should it read like a recipe that the reader is to follow. Typically this section is written in first person past tense in paragraph form since you conducted the experiment.
  • Results:  This section provides a written description of the data in paragraph form. Which condition produced the greatest reaction? Which produced the least? This section should also include numbered graphs or tables with descriptive titles. The objective is to present the data, not interpret the data. Do not discuss why something occurred; just state what occurred.
  • Discussion:  In this section you interpret and critically evaluate your results. Generally, this section begins by reviewing your hypotheses and whether your data support your hypotheses. In describing conclusions that can be drawn from your research, it is important to include outside studies that help clarify your results. You should cite outside resources. What is most important about the research? What is the take-home message? The discussion section also includes ideas for further research and talks about potential sources of error. What could you improve if you conducted this experiment a second time?
  • Biology 101 Labs. Authored by: Lynette Hauser. Provided by: Tidewater Community College. Located at: http://www.tcc.edu/. License: CC BY: Attribution
  • BIOL 160 - General Biology with Lab. Authored by: Scott Rollins. Provided by: Open Course Library. Located at: http://opencourselibrary.org/biol-160-general-biology-with-lab/. License: CC BY: Attribution

Laboratory Manual For SCI103 Biology I at Roxbury Community College

Enzymes are macromolecular biological catalysts. The molecules upon which enzymes may act are called substrates and the enzyme converts the substrates into different molecules known as products. Almost all metabolic processes in the cell need enzyme catalysis in order to occur at rates fast enough to sustain life. Metabolic pathways depend upon enzymes to catalyze individual steps. Enzymes are known to catalyze more than 5,000 biochemical reaction types. Most enzymes are proteins, although a few are catalytic RNA molecules. The latter are called ribozymes. Enzymes’ specificity comes from their unique three-dimensional structures.

Like all catalysts, enzymes increase the reaction rate by lowering its activation energy. Some enzymes can make their conversion of substrate to product occur many millions of times faster. An extreme example is orotidine 5’-phosphate decarboxylase, which allows a reaction that would otherwise take millions of years to occur in milliseconds. Chemically, enzymes are like any catalyst and are not consumed in chemical reactions, nor do they alter the equilibrium of a reaction. Enzymes differ from most other catalysts by being much more specific. Enzyme activity can be affected by other molecules: inhibitors are molecules that decrease enzyme activity, and activators are molecules that increase activity. Many therapeutic drugs and poisons are enzyme inhibitors. An enzyme’s activity decreases markedly outside its optimal temperature and pH.
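
To get a feel for the size of that speed-up, here is a rough back-of-the-envelope calculation in Python (the half-lives used, roughly 78 million years uncatalyzed versus about 18 milliseconds with the enzyme, are commonly quoted literature estimates, not figures from this manual):

    # Rough order-of-magnitude estimate of the rate enhancement by
    # orotidine 5'-phosphate decarboxylase (approximate literature values).
    seconds_per_year = 365.25 * 24 * 3600
    uncatalyzed_half_life_s = 78e6 * seconds_per_year  # ~78 million years
    catalyzed_half_life_s = 0.018                      # ~18 milliseconds

    rate_enhancement = uncatalyzed_half_life_s / catalyzed_half_life_s
    print(f"Rate enhancement: about {rate_enhancement:.0e}-fold")  # on the order of 10^17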

Some enzymes are used commercially, for example, in the synthesis of antibiotics. Some household products use enzymes to speed up chemical reactions: enzymes in biological washing powders break down protein, starch or fat stains on clothes, and enzymes in meat tenderizer break down proteins into smaller molecules, making the meat easier to chew.

In this laboratory, we will study the effect of temperature, concentration, and pH on the activity of the enzyme catalase. Catalase speeds up the following reaction:

2 H₂O₂ → 2 H₂O + O₂

Hydrogen peroxide is toxic. Cells therefore use catalase to protect themselves. In these experiments, we will use catalase enzyme from potato.

The first experiment will establish that our catalase works (positive control) and that our reagents are not contaminated (negative control).

Hydrogen peroxide does not degrade appreciably at room temperature in the absence of enzyme. When catalase is added to hydrogen peroxide, the reaction takes place and the oxygen produced leads to the formation of bubbles in the solution. The height of the bubbles above the solution will be our measure of enzyme activity (Figure 8.2).

8.1 Positive and negative controls (Experiment 1)

8.1.1 Experimental procedures

  • Obtain and label three 15 ml conical plastic reaction tubes.
  • Add 1 ml of potato juice (catalase) to tube 1 (use a plastic transfer pipette).
  • Add 4 ml of hydrogen peroxide to tube 1. Swirl well to mix and wait at least 20 seconds for bubbling to develop.
  • Use a ruler (Figure 8.1) to measure the height of the bubble column above the liquid (in millimeters; use the centimeter scale of the ruler) and record the result in Table 8.1.
  • Add 1 ml of water to tube 2.
  • Add 4 ml of hydrogen peroxide to tube 2. Swirl well to mix and wait at least 20 seconds.
  • Measure the height of the bubble column (in millimeters) and record the result in Table 8.1.
  • Add 1 ml of potato juice (catalase) to tube 3.
  • Add 4 ml of sucrose solution to tube 3. Swirl well to mix; wait 20 seconds.
  • Measure the height of the bubble column and record the result in Table 8.1.
Table 8.1: Positive and negative controls.
Tube # Height of bubbles (mm)
1
2
3


Figure 8.1: A ruler with metric (cm) and imperial (inch) scales.


Figure 8.2: Results from experiment 1. Compare with your results!

8.2 Effect of temperature on enzyme activity (Experiment 2)

8.2.1 Experimental procedures

Before you begin with the actual experiment, write down in your own words the hypothesis for this experiment:

  • Obtain and label three tubes.
  • Add 1 ml of potato juice (catalase) to each tube.
  • Place tube 1 in the refrigerator, tube 2 in a 37 °C (Celsius) heat block, and tube 3 in a 97 °C heat block for 15 minutes.
  • Remove the tubes with the potato juice (catalase) from the refrigerator and heat blocks and immediately add 4 ml hydrogen peroxide to each tube.
  • Swirl well to mix and wait 20 seconds.
  • Measure the height of the bubble column (in millimeters) in each tube and record your observations in Table 8.2 .

Do the data support or contradict your hypothesis?

Table 8.2: Effect of temperature on enzyme activity.
Tube # Height of bubbles (mm)
1
2
3


Figure 8.3: Results from experiment 2. Compare with your results!

8.3 Effect of concentration on enzyme activity (Experiment 3)

8.3.1 Experimental procedures

  • Measure the height of the bubble column (in millimeters) and record your observations in Table 8.3 .
  • Add 3 ml of potato juice (catalase).
Table 8.3: Effect of concentration on enzyme activity.
Tube # Height of bubbles (mm)
1
2
3


Figure 8.4: Results from experiment 3. Compare with your results!

8.4 Effect of pH on enzyme activity (Experiment 4)

8.4.1 Experimental procedures

  • Obtain 6 tubes and label each tube with a number from 1 to 6.
  • Place the tubes from left (tube #1) to right (tube #6) in the first row of a test tube rack.
  • Add 1 ml of potato juice (catalase) to each tube.
  • Add 2 ml of water to tube 1.
  • Add 2 ml of pH buffer 3 to tube 2.
  • Add 2 ml of pH buffer 5 to tube 3.
  • Add 2 ml of pH buffer 7 to tube 4.
  • Add 2 ml of pH buffer 9 to tube 5.
  • Add 2 ml of pH buffer 12 to tube 6.
  • Add 4 ml of hydrogen peroxide to each of the six tubes.
  • Swirl each tube well to mix and wait at least 20 seconds.
  • Measure the height of the bubble column (in millimeters) in each tube and record your observations in Table 8.4 .
Table 8.4: Effect of pH on enzyme activity.
Tube # Height of bubbles (mm)
1
2
3
4
5
6


Figure 8.5: Results from experiment 4. Compare with your results!

Figure 8.6: Catalase activity is dependent on pH. The data shown in this figure were obtained by three groups of students during a previous laboratory session. The triangles represent the data from the experimental results shown in Figure 8.5 .
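
If you would like to make a plot like Figure 8.6 from your own measurements, a short script along these lines would do it (assuming Python with matplotlib is available; the bubble heights below are placeholders to be replaced with your values from Table 8.4, and the water control in tube 1 is drawn separately since it has no defined buffer pH):

    import matplotlib.pyplot as plt

    # pH of the buffers added to tubes 2-6 (tube 1 received water instead).
    buffer_ph = [3, 5, 7, 9, 12]
    bubble_height_mm = [4, 18, 30, 22, 2]   # placeholder data; use your Table 8.4 values
    water_control_mm = 28                   # tube 1, placeholder value

    plt.plot(buffer_ph, bubble_height_mm, "o-", label="pH buffers (tubes 2-6)")
    plt.axhline(water_control_mm, linestyle="--", label="water control (tube 1)")
    plt.xlabel("pH")
    plt.ylabel("Bubble height (mm)")
    plt.title("Catalase activity as a function of pH")
    plt.legend()
    plt.show()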

8.5 Cleaning up

  • Empty the contents of the plastic tubes into the labeled waste container (brown bottle) in the chemical fume hood.
  • Discard the empty tubes and other waste in the regular waste basket.
  • Rinse the glass rod and glassware with water and detergent.
  • Return the glassware to the trays on your bench where you originally found it.

8.6 Review Questions

  • What is a catalyst?
  • What are enzymes?
  • What is the name of the enzyme that we studied in this laboratory session?
  • What is an enzyme substrate?
  • What is the substrate of the enzyme that we used in this laboratory session?
  • What are the products of the reaction that was catalyzed by the enzyme that we studied in this laboratory session?
  • What is the active site of an enzyme?
  • What is the purpose of the negative and positive controls?
  • State the hypothesis that was tested in experiment 2.
  • State the hypothesis that was tested in experiment 3.
  • State the hypothesis that was tested in experiment 4.
  • The enzyme from potato appeared to work better at 4 °C than at 37 °C. Would you expect the same if we had used the equivalent human enzyme? Justify your answer.
  • Why did heating the enzyme at high temperature (> 65 °C) result in loss of activity?

Stanford University


For much of human history, animals and plants were perceived to follow a different set of rules than the rest of the universe. In the 18th and 19th centuries, this culminated in a belief that living organisms were infused with a non-physical energy or “life force” that allowed them to perform remarkable transformations that couldn’t be explained by conventional chemistry or physics alone.


HT-MEK – short for High-Throughput Microfluidic Enzyme Kinetics – combines microfluidics and cell-free protein synthesis technologies to dramatically speed up the study of enzymes. (Image credit: Daniel Mokhtari)

Scientists now understand that these transformations are powered by enzymes – protein molecules comprised of chains of amino acids that act to speed up, or catalyze, the conversion of one kind of molecule (substrates) into another (products). In so doing, they enable reactions such as digestion and fermentation – and all of the chemical events that happen in every one of our cells – that, left alone, would happen extraordinarily slowly.

“A chemical reaction that would take longer than the lifetime of the universe to happen on its own can occur in seconds with the aid of enzymes,” said Polly Fordyce, an assistant professor of bioengineering and of genetics at Stanford University.

While much is now known about enzymes, including their structures and the chemical groups they use to facilitate reactions, the details surrounding how their forms connect to their functions, and how they pull off their biochemical wizardry with such extraordinary speed and specificity are still not well understood.

A new technique, developed by Fordyce and her colleagues at Stanford and detailed this week in the journal Science, could help change that. Dubbed HT-MEK — short for High-Throughput Microfluidic Enzyme Kinetics — the technique can compress years of work into just a few weeks by enabling thousands of enzyme experiments to be performed simultaneously. “Limits in our ability to do enough experiments have prevented us from truly dissecting and understanding enzymes,” said study co-leader Dan Herschlag, a professor of biochemistry at Stanford’s School of Medicine.


Closeup image of the HT-MEK device shows the individual nanoliter-sized chambers where enzyme experiments are performed. (Image credit: Daniel Mokhtari)

By allowing scientists to deeply probe beyond the small “active site” of an enzyme where substrate binding occurs, HT-MEK could reveal clues about how even the most distant parts of enzymes work together to achieve their remarkable reactivity.

“It’s like we’re now taking a flashlight and instead of just shining it on the active site we’re shining it over the entire enzyme,” Fordyce said. “When we did this, we saw a lot of things we didn’t expect.”

Enzymatic tricks

HT-MEK is designed to replace a laborious process for purifying enzymes that has traditionally involved engineering bacteria to produce a particular enzyme, growing them in large beakers, bursting open the microbes and then isolating the enzyme of interest from all the other unwanted cellular components. To piece together how an enzyme works, scientists introduce intentional mistakes into its DNA blueprint and then analyze how these mutations affect catalysis.

This process is expensive and time consuming, however, so like an audience raptly focused on the hands of a magician during a conjuring trick, researchers have mostly limited their scientific investigations to the active sites of enzymes. “We know a lot about the part of the enzyme where the chemistry occurs because people have made mutations there to see what happens. But that’s taken decades,” Fordyce said.

But as any connoisseur of magic tricks knows, the key to a successful illusion can lie not just in the actions of the magician’s fingers, but might also involve the deft positioning of an arm or the torso, a misdirecting patter or discreet actions happening offstage, invisible to the audience. HT-MEK allows scientists to easily shift their gaze to parts of the enzyme beyond the active site and to explore how, for example, changing the shape of an enzyme’s surface might affect the workings of its interior.

“We ultimately would like to do enzymatic tricks ourselves,” Fordyce said. “But the first step is figuring out how it’s done before we can teach ourselves to do it.”

Enzyme experiments on a chip

The technology behind HT-MEK was developed and refined over six years through a partnership between the labs of Fordyce and Herschlag. “This is an amazing case of engineering and enzymology coming together to — we hope — revolutionize a field,” Herschlag said. “This project went beyond your typical collaboration — it was a group of people working jointly to solve a very difficult problem — and continues with the methodologies in place to try to answer difficult questions.”

HT-MEK combines two existing technologies to rapidly speed up enzyme analysis. The first is microfluidics, which involves molding polymer chips to create microscopic channels for the precise manipulation of fluids. “Microfluidics shrinks the physical space to do these fluidic experiments in the same way that integrated circuits reduced the real estate needed for computing,” Fordyce said. “In enzymology, we are still doing things in these giant liter-sized flasks. Everything is a huge volume and we can’t do many things at once.”

The second is cell-free protein synthesis, a technology that takes only those crucial pieces of biological machinery required for protein production and combines them into a soupy extract that can be used to create enzymes synthetically, without requiring living cells to serve as incubators.

“We’ve automated it so that we can use printers to deposit microscopic spots of synthetic DNA coding for the enzyme that we want onto a slide and then align nanoliter-sized chambers filled with the protein starter mix over the spots,” Fordyce explained.


The scientists used HT-MEK to study how mutations to different parts of a well-studied enzyme called PafA affected its catalytic ability. (Image credit: Daniel Mokhtari)

Because each tiny chamber contains only a thousandth of a millionth of a liter of material, the scientists can engineer thousands of variants of an enzyme in a single device and study them in parallel. By tweaking the DNA instructions in each chamber, they can modify the chains of amino acid molecules that comprise the enzyme. In this way, it’s possible to systematically study how different modifications to an enzyme affect its folding, catalytic ability, and ability to bind small molecules and other proteins.

When the team applied their technique to a well-studied enzyme called PafA, they found that mutations well beyond the active site affected its ability to catalyze chemical reactions — indeed, most of the amino acids, or “residues,” making up the enzyme had effects.

The scientists also discovered that a surprising number of mutations caused PafA to misfold into an alternate state that was unable to perform catalysis. “Biochemists have known for decades that misfolding can occur but it’s been extremely difficult to identify these cases and even more difficult to quantitatively estimate the amount of this misfolded stuff,” said study co-first author Craig Markin, a research scientist with joint appointments in the Fordyce and Herschlag labs.

“This is one enzyme out of thousands and thousands,” Herschlag emphasized. “We expect there to be more discoveries and more surprises.”

Accelerating advances

If widely adopted, HT-MEK could not only improve our basic understanding of enzyme function, but also catalyze advances in medicine and industry, the researchers say. “A lot of the industrial chemicals we use now are bad for the environment and are not sustainable. But enzymes work most effectively in the most environmentally benign substance we have — water,” said study co-first author Daniel Mokhtari, a Stanford graduate student in the Herschlag and Fordyce labs.

Movie shows fluorescence buildup denoting catalytic reactions in a portion of the HT-MEK device over time. (Video credit: Craig Markin and Daniel Mokhtari)

HT-MEK could also accelerate an approach to drug development called allosteric targeting, which aims to increase drug specificity by targeting beyond an enzyme’s active site. Enzymes are popular pharmaceutical targets because of the key role they play in biological processes. But some are considered “undruggable” because they belong to families of related enzymes that share the same or very similar active sites, and targeting them can lead to side effects. The idea behind allosteric targeting is to create drugs that can bind to parts of enzymes that tend to be more differentiated, like their surfaces, but still control particular aspects of catalysis. “With PafA, we saw functional connectivity between the surface and the active site, so that gives us hope that other enzymes will have similar targets,” Markin said. “If we can identify where allosteric targets are, then we’ll be able to start on the harder job of actually designing drugs for them.”

The sheer amount of data that HT-MEK is expected to generate will also be a boon to computational approaches and machine learning algorithms, like the Google-funded AlphaFold project, designed to deduce an enzyme’s complicated 3D shape from its amino acid sequence alone. “If machine learning is to have any chance of accurately predicting enzyme function, it will need the kind of data HT-MEK can provide to train on,” Mokhtari said.

Much further down the road, HT-MEK may even allow scientists to reverse-engineer enzymes and design bespoke varieties of their own. “Plastics are a great example,” Fordyce said. “We would love to create enzymes that can degrade plastics into nontoxic and harmless pieces. If it were really true that the only part of an enzyme that matters is its active site, then we’d be able to do that and more already. Many people have tried and failed, and it’s thought that one reason why we can’t is because the rest of the enzyme is important for getting the active site in just the right shape and to wiggle in just the right way.”

Herschlag hopes that adoption of HT-MEK among scientists will be swift. “If you’re an enzymologist trying to learn about a new enzyme and you have the opportunity to look at 5 or 10 mutations over six months or 100 or 1,000 mutants of your enzyme over the same period, which would you choose?” he said. “This is a tool that has the potential to supplant traditional methods for an entire community.”

Fordyce is a member of Stanford Bio-X and the Wu Tsai Neurosciences Institute, and an executive committee member of Stanford ChEM-H. Herschlag is a member of Bio-X and the Stanford Cancer Institute, and a faculty fellow of ChEM-H. Other Stanford co-authors include Fanny Sunden, Mason Appel, Eyal Akiva, Scott Longwell and Chiara Sabatti.

The research was funded by Stanford Bio-X, Stanford ChEM-H, the Stanford Medical Scientist Training Program, the National Institutes of Health, the Joint Initiative for Metrology in Biology, the Gordon and Betty Moore Foundation, the Alfred P. Sloan Foundation, the Chan Zuckerberg Biohub and the Canadian Institutes of Health Research.




13 Enzymes Lab Report Activities 

January 4, 2023 // by Alison Vrana

Learning about enzymes is important for building basic skills and an understanding of biological processes. An enzyme is a protein that helps chemical reactions occur in the body. Digestion, for example, wouldn’t be possible without enzymes. In order to help students better understand how enzymes work, teachers often assign labs and lab reports. The experiment activities below explore how enzymes react under different experimental conditions such as temperature, pH, and time. Each enzymatic activity is engaging and can be adapted for any level of science class. Here are 13 enzyme lab report activities for you to enjoy.

1. Plant and Animal Enzyme Lab

This lab explores an enzyme that is common to both plants and animals. First, students will explore important concepts about enzymes, including what enzymes are, how they help cells, and how they create reactions. During the lab, students will look at plants and animals and discover enzymes that are common to both.

Learn More: Amy Brown Science

2. Enzymes and Toothpicks

This lab explores enzymes using toothpicks. Students will run different simulations with toothpicks to see how enzyme reactions can change with different variables. Students will look at enzyme reaction rates, how reaction rates change with substrate concentration, and the effect of temperature on enzyme reactions.

Learn More: Science Buddies

3. Hydrogen Peroxide Lab


In this lab, students explore how enzymes break down hydrogen peroxide using different catalysts. Students will use liver, manganese, and potato as catalysts. Each catalyst produces a unique reaction with hydrogen peroxide.

Learn More: Royal Society of Chemistry

4. Critical Thinking With Enzymes


This is an easy assignment that encourages students to think about what they know about enzymes and apply their knowledge to real-world scenarios. Students will think about how enzymes impact bananas, bread, and body temperature.

Learn More: The Science Teacher

5. Enzymes and Digestion

This fun lab explores how catalase, an important enzyme, protects the body from cell damage. Kids will use food coloring, yeast, dish soap, and hydrogen peroxide to simulate how enzymes react in the body. Once students complete the lab, there are also several activities for extension learning.

6. Enzymes in Laundry and Digestion


In this activity, students will take a look at how enzymes aid digestion and laundry. Students will read A Journey Through the Digestive System and Amazing Body Systems: Digestive System, and watch several videos, in order to prepare to discuss how enzymes aid in digestion and in the cleaning of clothes.

Learn More: Teach Engineering

7. Lactase Lab


Students investigate the enzyme lactase in rice milk, soy milk, and cow’s milk. During the lab, students will be able to identify the sugars in each type of milk. They will run the experiment with and without lactase to assess the glucose levels in each sample.

Learn More: Learning Undefeated

8. Catalase Enzyme Lab


In this lab, students assess how temperature and pH affect catalase efficiency. This lab uses potatoes to measure how pH affects catalase. Then, students repeat the experiment by changing the temperature of either the potato puree or the hydrogen peroxide to measure the effect of temperature on catalase.

Learn More: Science Lessons that Rock

9. How Heat Affects Enzymes

In this hands-on experiment, your pupils will learn how heat affects the activity of enzymes. Start by going through these informative examples and explanations with your class before encouraging them to use objects like pineapples and marshmallows to transform theory into reality.

Learn More: Expii

10. Enzymatic Virtual Lab

This website offers games that teach students about biology concepts such as enzymes. This virtual lab covers enzymes, substrates, enzyme shapes, and variables that affect enzyme reactions. Kids complete the lab online via a virtual portal.

Learn More: Bioman Biology

11. Enzyme Simulation


This website shows students how enzymes react in real-time via an online simulation. This simulation helps students make cognitive connections from physical labs. This simulation shows how starch breaks down with different enzymatic reactions.

Learn More: Biology Simulations

12. Enzyme Function: Penny Matching


This is another online activity that challenges students to see the similarities between using a penny machine and the enzymatic process. Students will view the penny machine in action and then compare this process to an enzyme-catalyzed reaction. Then, students can answer challenging questions. 

Learn More: CK-12

13. Apples and Vitamin C

For this experiment, students will test how vitamin C affects apples. Students will observe an apple sprinkled with powdered vitamin C and an apple without any powder over a period of time. Students see how vitamin C slows the browning process.

Learn More: The Homeschool Scientist


Looking Back: A Short History of the Discovery of Enzymes and How They Became Powerful Chemical Tools

Christian m. heckmann.

1 School of Chemistry, University of Nottingham, University Park, Nottingham NG7 2RD UK

Prof. Francesca Paradisi

2 Department of Chemistry and Biochemistry, University of Bern, Freiestrasse 3, 3012 Bern Switzerland

Enzymatic approaches to challenges in chemical synthesis are increasingly popular and very attractive to industry given their green nature and high efficiency compared to traditional methods. In this historical review we highlight the developments across several fields that were necessary to create the modern field of biocatalysis, with enzyme engineering and directed evolution at its core. We exemplify the modular, incremental, and highly unpredictable nature of scientific discovery, driven by curiosity, and showcase the resulting examples of cutting‐edge enzymatic applications in industry.

In this historical review we highlight the developments across several disciplines that were necessary to create the modern field of biocatalysis and showcase the resulting examples of cutting‐edge enzymatic applications in industry.


1. Introduction

Mesophilic organisms can carry out reactions under mild conditions, enabled by excellent catalysts: enzymes. Humans have, unknowingly at first, used these enzymes to their advantage for millennia, for example to ferment sugars into alcohol (as early as 7000 BC). [1] So how did we get from biocatalysis being used unknowingly to the modern application at the forefront of chemical synthesis? In this review, the origin of modern biocatalysis is summarized (Figure 1), starting with the discovery of enzymes and exploring the principles of biochemistry and molecular biology developed throughout the 20th century. These stepping stones led to the biotechnological advancements at the turn of the century, which resulted in increasingly sophisticated applications of enzymes, in particular in industry. This review aims at condensing these biological developments from the point of view of a chemist and is primarily intended to help chemists, but also scientists from other disciplines, entering the field, while also showcasing selected cutting-edge applications of enzymes that may be of interest to biologists as well.


Timeline of major developments in enzymology, molecular biology, and biocatalysis.

2. Early enzymology: demystifying life

In 1833, diastase (a mixture of amylases) was the first enzyme to be discovered, [2] quickly followed by other hydrolytic enzymes such as pepsin and invertase, [3] but the term enzyme was only coined in 1877 by Wilhelm Kühne. [4] The concept of catalysts, chemicals facilitating a reaction without undergoing any change themselves, was introduced in 1836 [5] by Berzelius who quickly hypothesized that enzymes were such catalysts. [6] Yeast, which had been observed in ethanolic fermentations, was also viewed as a catalyst, but soon it was discovered that it was a living organism which at the time seemed to contradict that concept. [7] Evidence from Pasteur that fermentation occurs in the absence of oxygen and failed attempts to isolate an enzyme able to carry out this transformation, were claimed as evidence by vitalists that a “life‐force” was necessary for these more complex transformations and that enzymes only carried out “simple” hydrolysis reactions. [8a] Indeed, this is often framed as a dispute between Pasteur and Liebig, with the former supporting vitalism and the latter supporting a mechanistic view that ascribes no special place to life. However, it appears more accurate to say that Pasteur supported the idea that fermentation was carried out by yeast through chemical means whereas Liebig opposed the idea of any causal link between yeast as a living organism and the catalytic fermentation reaction, instead thinking that the decay of yeast in the presence of oxygen was catalyzing the formation of alcohol from sugar. [8] Finally, in 1897, Eduard Büchner showed that a dead yeast extract could carry out the same fermentation reaction as living yeast, thus dealing the final blow to vitalism, which had already been on the decline (Nobel Prize in Chemistry 1907).[ 3 , 8a ]

The fermentation of sugars into ethanol and carbon dioxide was attributed to “zymase.” Further investigations started to reveal reaction intermediates and a dependency on phosphate and “co-zymase” (A. Harden and H. von Euler-Chelpin; Nobel Prize in Chemistry 1929), and started to untangle glycolysis. However, the chemical nature of enzymes was still being debated. In 1926, James B. Sumner crystallized the first enzyme (urease) and confirmed it was a protein. [9] John H. Northrop also crystallized several other proteins, amongst them pepsin, trypsin, and chymotrypsin. [6] They were awarded the Nobel Prize in Chemistry in 1946; in his Nobel lecture, [9] Sumner remarked that

“The organic chemist has never been able to synthesize cane sugar, but by using enzymes, the biological chemist can synthesize not only cane sugar but also gum dextran, gum levan, starch and glycogen.”

Indeed, a whole range of industrial applications of mainly whole organisms but also some enzyme preparations had already been developed. For example, glycerol (used in the production of explosives) was produced on a 1000 ton per month scale in Germany during World War I, employing fermentation in yeast with the final acetaldehyde reduction step inhibited by sulfite, resulting in dihydroxyacetone phosphate reduction. By 1949, citric acid was almost exclusively produced using the fungus Aspergillus niger (ca. 26,000,000 pounds per year in the US alone), even though it was not understood how the organism produced it. [10] In 1934, a patent was granted for the condensation of acetaldehyde (produced in-situ from glucose) with benzaldehyde catalyzed by whole yeast, giving L-phenylacetylcarbinol, which was then further reacted to give L-ephedrine, a stimulant used during anesthesia, as a decongestant, and also a precursor to illicit drugs such as methamphetamine (Scheme 1). [11] This procedure is still used today, highlighting the power of an efficient biocatalytic process. [12] Enzymes prepared from fungi or bacteria became alternatives to those initially obtained from plants or animals (e.g. amylases and proteases). Purified proteases had been used to clarify beer since 1911, and pectinases (from various fungi or malt) were used to clarify juices and wine. [9] In the early 1950s, several species of fungi were applied to the regio-selective hydroxylation of steroids for the production of cortisone, which had been impossible using chemical means. [13]


Synthesis of L‐ephedrine: Benzoin‐type addition of acetaldehyde (formed in‐situ by yeast metabolism) to benzaldehyde (now known to be catalyzed by pyruvate decarboxylase), [14] followed by reductive amination with methylamine. [11]

By 1949 a vast number of enzyme classes had been discovered and characterized extensively. Many pathways and intermediates were fully uncovered, yet little was known with regard to the mechanism by which individual enzymes worked. [9] Through the famous lock-and-key model, proposed by Emil Fischer in 1894, [15] as well as the Michaelis-Menten model of enzyme kinetics from 1913 (Equation 1, Figure 2), [16] it was understood that a substrate has to bind to the enzyme prior to catalysis, yet how this binding proceeds and how catalysis occurs afterwards was unsolved. The ratio k_cat/k_uncat has been found to be as high as 10^17, crowning enzymes as exceptional catalysts. Interestingly, most enzymes have similar k_cat values (within two orders of magnitude), while the rate constants of the corresponding un-catalyzed reactions (k_uncat) vary wildly. [17]


Plots of the Michaelis-Menten model, illustrating Equation 1. A) Velocity vs. substrate concentration at constant enzyme concentration. The substrate affinity K_m corresponds to the substrate concentration at which the reaction reaches half of v_max, which is the velocity at infinite substrate concentration. B) v_max vs. enzyme concentration. The reaction is first-order with respect to enzyme concentration, with the rate constant k_cat.

Equation 1: The Michaelis-Menten model and equation, in modern form: a) Enzyme and substrate combine in a reversible fashion to form the enzyme-substrate complex, which then goes on to release the enzyme and product in an irreversible reaction. b) Under the assumption of a steady-state concentration of the enzyme-substrate complex, the Michaelis-Menten equation can be written, describing the consumption of substrate depending on substrate concentration [S], maximum velocity v_max (itself dependent on enzyme concentration [E]_0), and substrate affinity K_m.
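
Since the typeset equation itself is not reproduced above, the standard form of the scheme and rate law described in a) and b) is written out here in LaTeX for reference:

    % Michaelis-Menten scheme (a) and rate equation (b), standard modern form
    \begin{align}
      \mathrm{E} + \mathrm{S}
        \;\underset{k_{-1}}{\overset{k_{1}}{\rightleftharpoons}}\;
      \mathrm{ES}
        \;\xrightarrow{\;k_{\mathrm{cat}}\;}\;
      \mathrm{E} + \mathrm{P}
      \\
      v = -\frac{\mathrm{d}[\mathrm{S}]}{\mathrm{d}t}
        = \frac{v_{\max}\,[\mathrm{S}]}{K_{\mathrm{m}} + [\mathrm{S}]},
      \qquad v_{\max} = k_{\mathrm{cat}}\,[\mathrm{E}]_{0}
    \end{align}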

3. Enzyme structures and elucidation of mechanisms

In 1948 Linus Pauling proposed that enzymes had to stabilize the transition state rather than the substrate as proposed by Fischer. [18] The detailed concept of a transition state itself had only been developed less than two decades earlier. [19] This was further refined by Koshland in 1958, [20] proposing the concept of “induced-fit,” explaining the specificity of enzymes on an abstract level. In parallel to this abstract understanding, a more detailed understanding of the structure of proteins was being developed. It had been hypothesized since the beginning of the 20th century that proteins were composed of chains of amino acids connected via amide bonds; [21] however, the order or even the relative amounts of amino acids were not well understood, and indeed the peptide hypothesis itself was frequently questioned. [22] In 1951, Sanger determined the amino acid sequence (referred to as primary structure) of insulin, revealing that indeed, as expected, it was a well-defined sequence of amino acids linked by amide bonds. [23] He received his first Nobel Prize in Chemistry in 1958 for this work.

Famously, Linus Pauling proposed how a chain of amino acids might fold into regular geometric features (i.e. α-helices and β-sheets; referred to as secondary structure, Figure 3) while sick in bed, based on his detailed understanding of the rigidity of the amide bond and “reasonable” interatomic distances. The rigorous understanding of chemical bonds had just been developed, work for which Pauling received his Nobel Prize in Chemistry in 1954. [24]


The original drawings of an α‐helix (left) and parallel and anti‐parallel β‐sheets (right), published by Pauling in 1951.[ 24b , 24d ]

In the meantime, X-ray crystallography, developed from 1912 onward by Max von Laue (Nobel Prize in Physics in 1914) and William and Lawrence Bragg (Nobel Prize in Physics in 1915), had become more sophisticated and was being applied to increasingly complex compounds. Evidence for Pauling's secondary structure from X-ray diffraction was reported by Max Perutz in 1951. [25] The first structures of proteins were solved in 1958–1960 by John Kendrew and Max Perutz (Nobel Prize in Chemistry in 1962). [26] This was initially met with some degree of disappointment as it revealed that proteins were “messy” (Figure 4) and squashed the hope that solving the structure of one protein would reveal the structure of all proteins (in contrast to DNA, where that expectation largely held true). [27] However, as higher resolution structures were obtained, the insight that could be gained into the mysterious world of enzymes became apparent and many groups set forth to investigate not just proteins but enzymes (Figure 4). [28]


Top left: Clay model of the first X‐ray structure of a protein, myoglobin, at 6 Å resolution. [26a] Right: electron density sections of myoglobin at 2 Å resolution and sketch of groups coordinated to iron. [26b] Bottom left: Model of the catalytic triad and oxyanion hole of chymotrypsin, as inferred from crystal‐structures. [31c]

Structures of lysozyme were solved in 1965, [29] and included structures of the enzyme with inhibitors bound to it, revealing the location and residues of its active site. Other enzyme structures solved around this time include bovine carboxypeptidase A in 1967, [30] both with and without substrate bound, revealing conformational changes (in agreement with the induced fit hypothesis) as well as key interactions between substrate and enzyme. The crystal structure of chymotrypsin (also in 1967) [31] paved the way to uncovering the classic catalytic triad and oxyanion hole of proteases (as well as esterases and other hydrolytic enzymes; Figure 4). In 1971, the Protein Data Bank (PDB) was founded with seven structures, [28] reaching 50 structures in 1979 and 100 structures three years later. At the end of 2019 it contained almost 160,000 structures. [32]

In parallel to the increasing understanding of the structure of enzymes, newly developed physical-chemistry techniques were also employed to elucidate mechanisms, such as detailed kinetics, isotopic labelling, isotope effects, and spectroscopic techniques. [33] The first mechanisms to be elucidated in that way were those of enzymes employing co-enzymes, as the structures (fragments) of co-enzymes were determined before the structures of whole proteins. Indeed, as early as 1936, [34] Otto Warburg showed that certain pyridines (analogous to the nicotinamide that could be obtained from hydrolysis of “co-zymase”) could transfer hydrides reversibly, implying that such a hydride transfer plays a role during glycolysis (he had previously received the Nobel Prize in Physiology or Medicine in 1931 for his work on the role of iron in respiration).

The full structure of thiamine (cocarboxylase) had been proved in 1936 by Williams and Cline. [35] The structures of pyridoxine, as well as of the biologically relevant derivatives pyridoxal and pyridoxamine, were established in the early 1940s by Esmond Snell soon after the discovery of transaminases. [36] Full structures of NAD(P)(H) (co-zymase), [37] ATP, [38] and FAD [39] were proved by Alexander Todd in the late 1940s and 50s (Nobel Prize in Chemistry in 1957). The structure of Vitamin B12 (cyanocobalamin) was solved through X-ray crystallography by Dorothy Hodgkin in 1955 [40] (Figure 5; Nobel Prize in Chemistry in 1964).


Selected structures of common cofactors that were known by 1955: NAD+, NADP+ and FAD are redox catalysts, ATP transfers energy released during glycolysis, thiamine is the co-factor of pyruvate decarboxylase during fermentation, and pyridoxal is the co-factor of transaminases, which are of particular industrial importance, as well as of racemases, decarboxylases, and lyases involved in amino acid metabolism.

The chemistries of those co‐factors could be investigated in the absence as well as in the presence of their enzymes, and from this, mechanistic details could be inferred. In addition, structural analogues could be synthesized and their reactivities compared. For example, careful isotope labelling studies in the early 1950s revealed that one hydride of the pyridine ring of NAD(P)H was transferred during reduction/oxidation in a stereospecific manner, giving additional detail to Otto Warburg's mechanism (Figure  6 ). [41] In 1957, Breslow showed by NMR that an anion in position 2 of a thiazolium ring could exist, revealing the reactive center of thiamine (Figure  6 ). [42] The observation that pyridoxal, the co‐factor of transaminases, as well as structural analogues with electron withdrawing groups on the aromatic ring, can catalyze transamination in the absence of the enzyme allowed Alexander Braunstein and Esmond Snell to postulate independently a likely catalytic cycle in 1954, which later proved to be correct (Figure  7 ).[ 33b , 36b , 43 ]


Selected mechanisms of co‐factors that were being elucidated in the 1950s: enantiospecificity during hydride transfer from NAD(P)H in alcohol dehydrogenases, and thiamine‐dependent decarboxylation. [33b]


Mechanism of transamination. For clarity, the individual steps of aldimine/ketimine formation and hydrolysis as well as transimination are not shown. The mechanism is symmetric – referred to as a “ping‐pong” bi‐bi or shuttle mechanism – and fully reversible. Note: the ketimine intermediates are a second aldimine if one of the R‐groups is a hydrogen. Catalytic lysine: red; amine donor: blue; ketone acceptor: green.

However, these advances in the knowledge of how enzymes work had no immediate impact on industrial biocatalysis, which was largely limited by the low quantities in which most enzymes could be obtained. Major developments at the time include the application of glucose isomerase for the production of high fructose corn syrup (HFCS) and the development of a penicillin acylase process for the production of 6-aminopenicillanic acid (6-APA, at the time obtained from chemical cleavage of penicillin), a building block for semi-synthetic antibiotics such as ampicillin and amoxycillin (Scheme 2). [44] Key for the success of both applications was the discovery, in the 1950s, that proteins could be immobilized with retention of their function. [45] This allowed the enzymes to be recycled and used in a continuous fashion, reducing cost by reducing the quantity of enzyme that has to be isolated. The HFCS processes became widespread in the 1970s. [46] However, the production of 6-APA via chemical hydrolysis predominated until the early 1990s, at least partially due to the difficulty of obtaining sufficient quantities of penicillin acylase before then. [47] A notable exception is Bayer, which had used an immobilized penicillin acylase since 1972 as a closely guarded secret, employing E. coli strains that achieved a penicillin acylase content of ca. 20 %. [48] Processes to synthesize amino acids using immobilized enzymes (as well as whole cells) were commercialized in Japan from 1973. [49]


Top: Isomerization of D-glucose to D-fructose, catalyzed by an immobilized glucose isomerase as used in the production of HFCS. Bottom: Hydrolysis of penicillin to give 6-APA, which can then be acylated to give several semi-synthetic antibiotics.

4. The DNA revolution

In order for enzymes to enjoy more widespread use, their production had to be ramped up dramatically. In addition, to apply the insights into enzyme mechanisms described above, enzyme active sites had to be tweaked somehow. The key to both these problems was the understanding of how DNA encodes proteins, as well as the development of efficient ways of manipulating DNA. Of course, in parallel to the research into proteins described above, research into DNA was also ongoing. While DNA was originally viewed as less important than proteins (due to its simple make-up of four building blocks), this view quickly changed with Avery's discovery in 1944 that it was the carrier of hereditary information. [50] Of course, the correct structure of DNA was postulated in 1953 by Watson and Crick [51] (Nobel Prize in Physiology or Medicine in 1962) from X-ray diffraction data by Rosalind Franklin.

This quickly led to a postulation of how genetic information is encoded in DNA: the hypothesis that DNA encodes amino acid sequences [26e] and that the amino acid sequence alone determines the structure of proteins, as demonstrated by Anfinsen in 1961 [52] (Nobel Prize in Chemistry in 1972). The process of transcription of DNA into mRNA and the translation of mRNA into protein was itself broadly solved within a decade of the discovery of the structure of DNA; transcription as a concept was proposed by François Jacob and Jacques Monod in 1961 [53] (Nobel Prize in Physiology or Medicine in 1965). In the same year, mRNA was discovered, [54] the triplet code was established, [55] and the first codon (UUU) was solved by Nirenberg. [56] In competition with several other groups, [57] amongst them that of Gobind Khorana, who developed a sequence-specific chemical synthesis of polynucleotides, the genetic code (Figure 8) was fully solved by 1966 and its universality established by 1967 [58] (Nirenberg, Khorana, and Robert W. Holley (for the isolation of tRNA) received the Nobel Prize in Physiology or Medicine in 1968).

Figure 8. The genetic code. Codons consisting of three bases (triplets) correspond to different amino acids. Each amino acid may be spelled by multiple codons (the code is degenerate). The chart is read from the inside outwards (following the red arrows), e. g. “AUG” corresponds to the start codon, methionine, which marks the beginning of a protein. The genetic code is universal, i. e. identical across all organisms with only a few exceptions. https://commons.wikimedia.org/wiki/File:Aminoacids_table.svg, public domain.
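To make the codon-to-amino-acid mapping concrete, the following minimal sketch translates a short open reading frame using a deliberately truncated codon table; the example mRNA sequence is invented and the table covers only the codons it needs.

```python
# Minimal sketch: translating an mRNA open reading frame using a (partial) codon table.
# The codon assignments shown are standard; the table is truncated for brevity and the
# example sequence is invented for illustration only.

CODON_TABLE = {
    "AUG": "Met",  # start codon
    "UUU": "Phe", "UUC": "Phe",
    "GGU": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
    "AAA": "Lys", "AAG": "Lys",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna: str) -> list[str]:
    """Read the mRNA in triplets from the first AUG until a stop codon."""
    start = mrna.find("AUG")
    peptide = []
    for i in range(start, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3], "Xaa")  # Xaa = codon not in truncated table
        if residue == "STOP":
            break
        peptide.append(residue)
    return peptide

print(translate("GGAUGUUUGGCAAAUAGCC"))  # ['Met', 'Phe', 'Gly', 'Lys']
```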

Understanding the meaning of the genetic code is of course of limited use unless one can also read the DNA sequence. Frederick Sanger developed an ingenious hijacking of normal DNA replication in 1977 (Figure 9): [59] by supplying a small quantity of nucleotides that could not be further extended (because they lacked the 3′-OH), strands of DNA truncated after every A (or C, G, or T; depending on which was supplied as the dideoxynucleotide) were produced, which could be separated by size using gel electrophoresis. Repeating this experiment for all four nucleotides, the sequence of bases in the template could be deduced. This earned Frederick Sanger his second Nobel Prize in Chemistry in 1980. While initially the DNA was visualized using radioactive labels, these were quickly replaced by fluorescently labelled dideoxynucleotides, allowing all four bases to be present in the same reaction mixture and increasing throughput. Using capillary electrophoresis, automated sequencing became possible. [60] Next-generation sequencing, introduced in the mid-2000s, involves the massively parallelized sequencing of many smaller DNA segments that are then assembled in silico. [61] This has significantly reduced the cost and time of genome sequencing, and has thus resulted in a dramatic increase in available genomes, with over 55,000 genomes deposited in the NCBI database as of August 2020. [62]

Figure 9. Principle of Sanger sequencing: A DNA strand (blue) is copied by DNA polymerase. If a small quantity of dideoxynucleotides (ddNTPs) is offered in addition to deoxynucleotides (for example ddCTP), chains will be terminated whenever a ddCTP is incorporated instead of a dCTP, as the 3′-OH needed for chain extension is missing. If this experiment is repeated for all four nucleotides, and the products are separated by size, the sequence of the DNA template can be inferred. Modern Sanger sequencing includes all four ddNTPs in a single sequencing reaction and distinguishes incorporation of the different bases at the termination site via fluorescent labels, such as the label (red) for the ddT – BigDye terminator shown. [60]
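As a toy illustration of the chain-termination logic only (not of any real instrument software), the sketch below generates the set of terminated fragment lengths for each base and reads the sequence back from the size-sorted fragments; the template sequence is invented, and the simplification of working directly on the template strand is noted in the comments.

```python
# Toy model of the logic behind Sanger (chain-termination) sequencing.
# For each base, every position where that base occurs gives rise to a terminated fragment
# of the corresponding length; sorting all fragments by length and noting which ddNTP
# reaction produced each one reconstructs the sequence.
# (Real sequencing reads the complementary strand and uses fluorescent ddNTPs; this
# simplified sketch works directly on the template for clarity.)

def sanger_read(template: str) -> str:
    fragments = []  # (fragment_length, terminating_base)
    for base in "ACGT":
        for position, b in enumerate(template, start=1):
            if b == base:
                fragments.append((position, base))  # chain terminated at this position
    # Separating fragments by size (the gel/capillary step) orders the terminations.
    return "".join(base for _, base in sorted(fragments))

template = "GATTACAGGC"  # invented example
assert sanger_read(template) == template
print(sanger_read(template))
```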

At the same time, and leading on from Avery's experiment, the transmission of genetic information in bacteria was being investigated. In 1952, Joshua Lederberg coined the term “plasmid” to describe such transmissible DNA and discovered the nature of its transmission (Nobel Prize in Physiology or Medicine in 1958). [63] Also in 1952, Salvador Luria [64] and Giuseppe Bertani [65] observed that bacteriophages from one strain of E. coli have a decreased virulence in another, but upon growth in the second strain would show increased virulence for it and a decreased virulence for the original strain; they were observing the effect of restriction enzymes (so called because they restrict the growth of bacteriophage). This effect was then also observed for several other bacteria. However, it was only in the early 1960s that Werner Arber proposed that these enzymes are site-specific endonucleases and that host bacteria protect their own DNA through modification (methylation). [66] Matthew Meselson isolated the first such restriction endonuclease in 1968 (EcoKI). [67] These early restriction enzymes recognized a specific sequence but did not cut in a specific location, and are now known as type I restriction enzymes. Type II restriction enzymes, which cut DNA in specific locations (Figure 10), were discovered by Hamilton Smith in 1970 (HindII and HindIII). [68] In conjunction with gel electrophoresis, this allowed the digestion of DNA into fragments of defined size which could then be separated, as shown by Daniel Nathans in 1971. [69] Arber, Smith, and Nathans won the Nobel Prize in Physiology or Medicine in 1978.

Figure 10. Type I restriction enzymes cut at a non-defined location remote from the recognition site (example of EcoKI). Type II restriction enzymes cut at a well-defined position within (or close to) the recognition sequence, often in a staggered way, producing cohesive ends (example of HindIII).
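A minimal sketch of what “cutting at a defined position within the recognition sequence” means computationally is given below. The HindIII recognition site AAGCTT and its cut after the first A (A^AGCTT) are real; the input DNA is invented and only one strand is considered.

```python
# Minimal sketch of a type II restriction digest on a single strand.
# HindIII recognizes AAGCTT and cuts after the first A (A^AGCTT), producing cohesive ends.
# The input DNA below is invented for illustration.

def digest(dna: str, site: str = "AAGCTT", cut_offset: int = 1) -> list[str]:
    """Return the fragments obtained by cutting `dna` at every occurrence of `site`."""
    fragments, start, pos = [], 0, dna.find(site)
    while pos != -1:
        fragments.append(dna[start:pos + cut_offset])  # cut within the recognition site
        start = pos + cut_offset
        pos = dna.find(site, pos + 1)
    fragments.append(dna[start:])
    return fragments

dna = "CCGGAAGCTTTTGACCAAGCTTGGA"
print(digest(dna))  # ['CCGGA', 'AGCTTTTGACCA', 'AGCTTGGA']
```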

As many type II restriction enzymes produce cohesive ends (complementary single-stranded overhangs), it was then realized that DNA from different sources could be stitched together, through the action of DNA ligase, if cut with the same restriction enzyme. The first such “recombinant” DNA was reported by Paul Berg in 1972 (Nobel Prize in Chemistry in 1980). [70] It thus became possible to introduce any piece of DNA from any organism into (for example) E. coli. [71] Plasmids for the convenient introduction of such recombinant DNA were therefore being developed in the 1970s. [70c] One of the most famous of these plasmids is pBR322, developed by Bolivar and Rodriguez (BR) in 1977.[ 70c , 72 ] This plasmid made use of two antibiotic resistance genes and several unique restriction sites within them to allow for the selection of colonies that had a) taken up the plasmid and b) taken up a plasmid containing an insert (Figure 11). The propagation of recombinant DNA in a new host is referred to as cloning. The pUC series of plasmids (UC for University of California), derived from pBR322, allowed for colorimetric detection of inserts. [73] Finally, the pET series of vectors, also derived from pBR322, was created in the late 1980s and included a T7 promoter, allowing for the selective expression of the DNA insert (ET for Expression by T7 RNA polymerase).[ 74a , 74b ] Strains of E. coli were generated containing the gene for the T7 polymerase under the control of a modified lac promoter (lacUV5), as a lysogen of the DE3 phage.[ 74b , 74c , 74d , 74e ] Alternative promoter systems were also developed, such as the aforementioned lac promoter, as well as the trc, pL, and tetA promoters, and more, each with their own advantages and drawbacks. [75]

Figure 11. Left: Map of pBR322, showing the unique restriction sites inside both antibiotic resistance genes. Right: Generic map of a typical (empty) expression vector, having an origin of replication (for replication in vivo), a selective marker (ampicillin resistance in this instance), and the T7 promoter and terminator flanking a His-tag (to allow purification of the expressed protein) and the multiple cloning site, which contains a large number of unique restriction sites (not shown) for easy cloning.

These developments in molecular biology revolutionized enzymology and biocatalysis. For the first time, the DNA sequence of an enzyme of interest could be determined and cloned, and the enzyme could be over-expressed in E. coli (or another suitable organism) and thus be obtained in sufficient quantities to be studied and used in industrial applications. The first recombinant protein produced was insulin in 1978, and the commercial production of human insulin started in 1982. [76] Prior to that, insulin had to be isolated from pigs or cows and often had limited and inconsistent efficacy, as well as an inconsistent supply. [77] Recombinant DNA technology also allowed penicillin acylase to be obtained in sufficient quantities and enabled its widespread application toward the synthesis of 6-APA, as mentioned above.[ 45e , 48 ] Indeed, penicillin acylase was one of the first enzymes expressed recombinantly, in 1979, only one year after insulin.[ 45d , 48 ] The availability of this enzyme also enabled the development of its application in the reverse direction, catalyzing the formation of the amide bond between 6-APA and the side chains found in semi-synthetic antibiotics such as amoxicillin and ampicillin (Scheme 3; also see below for a discussion of the role of immobilization).[ 44 , 47 , 78 ] Around the same time, recombinant chymosin (which selectively hydrolyzes casein between residues F105 and M106, resulting in the curdling of milk) started replacing natural rennet, obtained from calf stomachs, in cheese-making. [79] This provided a cheaper, more stable supply for cheese-making as well as more consistent results due to a higher purity. By 2006, up to 80 % of all rennet was recombinant chymosin and cheese production in the US had increased over two-fold. [80]

Scheme 3. Application of penicillin acylase for the synthesis of amoxicillin and ampicillin from 6-APA. X = NH2 or OMe.

5. Directed evolution and the beginnings of modern biocatalysis

As great as natural enzymes are at carrying out their function, they often present drawbacks that make them unsuitable for industrial applications, such as a lack of stability or (co-)solvent tolerance, or a very limited substrate scope. While immobilization can address the stability problems (see below), [81] it quickly became desirable to be able to change the properties of the enzymes themselves. To some extent this had already been done routinely, through strain optimization. Whole organisms were subjected to mutation-inducing conditions, such as radiation or chemical agents, and the resulting strains were screened for favorable phenotypes. [82] Through this method, strains producing larger quantities of desirable products, either specific enzymes (as in the case of penicillin acylase at Bayer mentioned above) or chemicals, could be obtained, and entirely new pathways could be introduced. [83] However, these approaches were slow, unlikely to directly change the properties of any specific enzyme, and could only really be applied to organisms with sufficiently short replication cycles. With the availability of recombinant DNA, as well as the understanding of enzymes and their mechanisms outlined above, introduction of specific mutations into a target enzyme was now within reach, irrespective of the organism it originated from. Indeed, a general method for site-directed mutagenesis was reported by Michael Smith in 1978 (Nobel Prize in Chemistry in 1993), the same year as the cloning of insulin was achieved. By designing DNA primers complementary to the target sequence but harboring the desired mutations, and extending them with DNA polymerase using the target sequence as a template, copies containing the mutation could be made (Figure 12). [84] Of course, efficient syntheses of specific DNA sequences such as those developed by Khorana, [85] Gillam, [86] and Caruthers [87] were instrumental for this. [88]

Figure 12. The principles of PCR: DNA is denatured at high temperature, primers supplied in the reaction mixture are annealed, and the template is copied. Repeated cycles exponentially amplify the target sequence. Site-directed mutagenesis: a mutation is incorporated in the primer; the amplified product now contains the changed base pair. epPCR: a polymerase that occasionally incorporates incorrect nucleotides is used. The product now contains a set of different sequences that differ from the parent in a few positions. Recombination: several sequences are shuffled to produce a diverse set of new sequences from the parents.
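To illustrate the primer-based logic of site-directed mutagenesis (not any particular kit or published protocol), the sketch below assembles a mutagenic primer carrying a single changed codon flanked by sequence matching the template; the gene fragment, codon position, flank length, and codon choice are all invented for illustration.

```python
# Illustrative sketch of designing a mutagenic primer: the primer matches the template
# except for one codon, which is swapped for the codon of the desired new amino acid.
# Gene fragment, target position, and flank length are invented; real primer design also
# considers melting temperature, GC content, secondary structure, etc.

def mutagenic_primer(gene: str, codon_index: int, new_codon: str, flank: int = 9) -> str:
    """Return a forward primer introducing `new_codon` at codon `codon_index` (0-based)."""
    start = codon_index * 3
    upstream = gene[max(0, start - flank):start]
    downstream = gene[start + 3:start + 3 + flank]
    return upstream + new_codon.upper() + downstream

gene = "ATGGCTAGCTGTGAAGATCTGCCTAAA"   # invented fragment; codon 3 is TGT (Cys)
print(mutagenic_primer(gene, codon_index=3, new_codon="AGC"))  # Cys -> Ser swap
```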

Using this approach, the role of catalytic residues could be directly investigated. For example, changing the cysteine in the catalytic triad found in tyrosyl-tRNA synthetase to a serine (as found in the esterases mentioned above) greatly reduces the efficiency of the enzyme. [89] Indeed, the role of several residues in several enzymes could now be quantified, confirming and in some cases revising mechanisms that had been postulated based on crystallography. [90a] However, attempts to use this technique to introduce desirable properties into enzymes were quickly met with the realization that the effect of mutations was often unpredictable, and rational engineering of enzymes was often not successful. Consequently, a shift was quickly made toward more random approaches, such as site-saturation mutagenesis, in which targeted residues are changed to all possible amino acids rather than to a specific one. Through this, some progress was made, such as the introduction of a stabilizing mutation into subtilisin, a protease with application in laundry detergents,[ 81 , 90 ] and enhanced thermostability of glucose isomerase. [91]

With the development of the polymerase chain reaction (PCR, Figure  12 ) in the 1980s by Kary Mullis (Nobel Prize in Chemistry in 1993), it became possible to produce large numbers of copies of DNA sequences from a single template. [92] By modulating the fidelity of the polymerase, random mutations could be introduced into the amplified product (error‐prone or epPCR, Figure  12 ). In the early 1990s, Frances Arnold used this technique to create large libraries of mutants to which she then applied evolutionary pressure. In her own words, [93] she

“rejected microbial growth or survival selections favored by microbiologists and geneticists. Thus we turned to good old‐fashioned analytical chemistry to develop reproducible, reliable screens that reported what mattered to us.”

In doing so she managed to produce a variant of subtilisin E, carrying a total of 10 mutations, that could tolerate high concentrations of DMF. [94] Thus, the field of directed evolution was born. In 1994, Pim Stemmer introduced the concept of DNA shuffling (Figure 12), mimicking the DNA recombination that occurs in organisms as a way to increase genetic diversity, and applied it to recombinant DNA in vitro. As the method is not restricted to genes from a single species, very diverse proteins can be mixed together to create new sequences very distant from natural ones. [95] This technique proved very powerful on its own, but especially when combined with epPCR, allowing the combination of mutations from several mutants without the need for any understanding of how the different mutations would interact with each other. [96] Frances Arnold received the Nobel Prize in Chemistry in 2018 for the directed evolution of enzymes. In addition to co-solvent tolerance, directed evolution was quickly used to create enzymes with improved thermostability,[ 97a , 97b , 97c , 97d ] pH stability, [97c] enhanced activity at low temperatures,[ 97c , 97e ] activity toward unnatural substrates,[ 96 , 98 ] modified enantioselectivity, [99] or combinations of the above. Thus, it quickly established itself as a powerful tool in protein engineering across structurally and functionally diverse classes of enzymes. With random mutagenesis methods it was quickly realized that beneficial mutations are often found in unexpected parts of the enzyme, explaining why early rational attempts struggled to accomplish such modifications.[ 81 , 93 , 100 ]
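The overall logic of a directed-evolution campaign (random mutagenesis, screening for a property of interest, and carrying the best variant forward into the next round) can be summarized in a few lines. Everything below, including the surrogate “fitness” function, is an invented toy and does not model any real enzyme or assay.

```python
# Toy directed-evolution loop: randomly mutate a parent sequence (mimicking epPCR),
# "screen" the variants with a surrogate fitness function, and take the best variant
# forward as the next parent. The fitness function simply rewards similarity to an
# arbitrary target sequence and stands in for a real assay; it is purely illustrative.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
TARGET = "MKTAYIAKQR"          # invented "ideal" sequence used by the surrogate assay
random.seed(0)

def fitness(seq: str) -> int:
    return sum(a == b for a, b in zip(seq, TARGET))

def mutate(seq: str, rate: float = 0.2) -> str:
    return "".join(random.choice(AMINO_ACIDS) if random.random() < rate else aa for aa in seq)

parent = "MKLAYGAKSR"           # invented starting sequence
for generation in range(10):
    library = [mutate(parent) for _ in range(200)]      # epPCR-style library
    best = max(library, key=fitness)                    # the screening step
    if fitness(best) > fitness(parent):                 # keep only improvements
        parent = best
    print(f"round {generation + 1}: best fitness {fitness(parent)}/{len(TARGET)}")
```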

The sudden availability of biocatalysts with properties suitable for industrial applications, as well as the ability to create those properties at will, made them very attractive for use in synthetic applications that had so far been considered out of reach. [101] Of course, catalysis itself had become a major field of interest in synthetic chemistry in the second half of the 20th century, as the environmental impact of traditional (stoichiometric) chemistry was gaining attention. [102] This gained more traction with the conceptual development of green chemistry in the 1990s, in parallel to the advances made in biocatalysis outlined above. [103] Thus, it is not surprising that biocatalysis formed a key strategy for accomplishing the goals of green chemistry from the start. [102b] Indeed, it promises to address many of the “12 principles of green chemistry” (Figure 13), in particular with regard to hazardous reagents and waste, energy requirements, number of steps, and the inherently renewable and biodegradable nature of enzymes. [104] The number of biocatalytic processes in industry started increasing rapidly and continues to do so to this day: there were around 60 processes in 1990, 134 processes in 2002, and several hundred by 2019. [105]

Figure 13. The 12 principles of green chemistry, reproduced with permission from ACS Green Chemistry Institute® (https://www.acs.org/content/acs/en/greenchemistry/principles/12‐principles‐of‐green‐chemistry.html). Copyright 2020 American Chemical Society.

Perhaps one of the most successful examples developed at the time was the use of lipases, in particular CalB from Candida antarctica, in organic solvents, allowing ester and amide formation without competing hydrolysis; this is frequently employed in (dynamic) kinetic resolutions of chiral alcohols and amines. The latter was developed at BASF [106] and is often referred to as the “BASF process” or “ChiPros technology” (Scheme 4). [107] By 2004, multiple BASF plants produced chiral amines on a scale of >1000 tons per year, and this process is still in use today. The reactions can be carried out without solvent, are often nearly quantitative, both amine and amide are readily isolated, and the undesired enantiomer can be recycled, making this process highly efficient. [108]

Scheme 4. The lipase-catalyzed BASF process for the kinetic resolution of amines. Enantioselectivity is often essentially perfect and conversions quantitative, the amide and amine can be separated by distillation, the amide is readily hydrolyzed (giving access to both enantiomers), and the undesired enantiomer can be racemized and recycled. The process can also be run neat (e. g. in the case of 1-methoxy-2-aminopropane). [108b] Esters other than the ethyl ester may be used; however, the methoxy group is critical for an efficient reaction.

6. “Smart” libraries and applications of enzyme engineering

While the BASF process uses a wild-type enzyme, stabilized through immobilization, a strong interest in enzyme engineering has developed in industry. One major barrier to directed evolution is screening. In Frances Arnold's original paper, [94a] the enzyme was secreted from colonies of bacteria and digestion of the substrate in plates could be observed as a decrease in turbidity, allowing a large number of variants to be screened relatively easily. In general, however, screening is not straightforward and is usually the bottleneck. This is tackled on two fronts: the development of faster high-throughput screens, and the reduction of library sizes by increasing the proportion of hits. While the former is often highly specific to the (class of) enzymes being evolved, more general concepts exist for the latter. [109] For example, the structure of the enzyme may be used to assess which recombinations are less likely to disrupt the overall fold, in a process called SCHEMA, [110] which can then be used to reduce the size of combinatorial libraries.

Advances in the understanding of protein structures as well as dynamics have increasingly allowed target residues to be identified with more reliability than was previously possible, reducing the need for random mutagenesis across the whole gene, although it remains a valuable tool. [111] This is accomplished using increasingly sophisticated bioinformatic tools for docking substrates into active sites, molecular dynamics simulations, and protein structure modeling, which can help predict the likely effect of potential mutations. One very powerful tool that has emerged is Rosetta, which has also been applied to the complete de novo design of proteins. [112] The availability of increasing numbers of structures of diverse enzymes within a given family, and the even larger availability of sequences (due to next-generation high-throughput sequencing technologies), allows points of natural variation to be identified which may then be targeted. The flexibility of residues as determined by X-ray crystallography may also help identify target regions. [113] Alternatively, random mutagenesis might be used to identify hotspots which are then further investigated by more targeted mutagenesis. [114] In addition, the amino acids found in nature at a given position can inform which substitutions to include in a given library. [115]

Several residues may be targeted together, to increase the chance of detecting synergistic effects of mutations. One such approach is combinatorial active-site saturation testing (CASTing), developed by Manfred Reetz, [116] whereby multiple residues lining the active site are saturated at the same time, allowing synergistic effects between mutations to emerge. This has been particularly successful in changing the enantioselectivities and substrate scopes of enzymes.[ 107 , 117 ] Amine dehydrogenases (AmDHs) were created from amino acid dehydrogenases in this way. [118] Several sites of interest (each potentially consisting of multiple residues) may be targeted sequentially, in a process called Iterative Saturation Mutagenesis (ISM), also developed by Reetz. [119]
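A back-of-the-envelope calculation helps explain why sites are saturated in small groups (as in CASTing and ISM) rather than all at once: both the number of variants and the number of clones to screen for reasonable coverage grow exponentially with the number of simultaneously randomized positions. The sketch below assumes NNK degenerate codons (32 codons covering all 20 amino acids) and uses the commonly quoted oversampling estimate T ≈ −V·ln(1−F) for an expected library coverage F; treat the exact numbers as rough guides only.

```python
# Rough library-size estimates for simultaneous site-saturation mutagenesis using NNK
# degenerate codons (32 codons encoding all 20 amino acids). The oversampling estimate
# T = -V * ln(1 - F) (clones needed so that an expected fraction F of the V codon
# combinations has been sampled) is a common rule of thumb; numbers are order-of-magnitude
# guides only.
import math

def clones_for_coverage(n_sites: int, coverage: float = 0.95) -> tuple[int, int]:
    codon_variants = 32 ** n_sites                       # NNK codon combinations
    clones = math.ceil(-codon_variants * math.log(1 - coverage))
    return codon_variants, clones

for n in range(1, 6):
    variants, clones = clones_for_coverage(n)
    print(f"{n} site(s): {variants:>12,d} codon variants -> ~{clones:>14,d} clones for 95 % coverage")
```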

Statistical tools and machine learning are also powerful ways to increase the efficiency of directed evolution, [120] for example through protein sequence activity relationships (ProSAR). [121] In an initial library, mutations are classified as beneficial, neutral, or detrimental, which informs which mutations to incorporate into subsequent libraries, as opposed to simply taking the best overall variant and generating a new library from it. This strategy was successfully applied by Codexis in the engineering of a halohydrin dehalogenase (HHDH) for the synthesis of (R)-4-cyano-3-hydroxybutyrate, a key intermediate for the synthesis of atorvastatin, a cholesterol-lowering drug. [122] Overall, the volumetric productivity was improved 4000-fold over 18 rounds of evolution and 35 mutations were introduced, meeting the process requirements for the enzyme. The authors note that half of the mutations introduced in the final variant were not present in the best variant when it was initially selected and would have been missed in a hit-based approach. While this approach can reduce screening efforts, it requires a larger sequencing effort. However, this has become increasingly feasible as the cost of DNA sequencing has steadily declined. [81]
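The core idea behind a ProSAR-style analysis (regressing measured activity on the presence or absence of individual mutations, then classifying each mutation by its fitted effect) can be sketched with ordinary least squares. The variant data below are invented, the classification thresholds are arbitrary, and this is emphatically not the Codexis implementation, just an illustration of the principle.

```python
# Sketch of a ProSAR-like analysis: encode each variant by which mutations it carries,
# fit a linear model of activity vs. mutation presence, and classify mutations by the
# sign and size of their coefficients. All data are invented for illustration.
import numpy as np

mutations = ["A12V", "F56L", "K98R", "D131E"]   # hypothetical mutation names
# rows = variants, columns = 1 if the variant carries that mutation
X = np.array([
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
    [1, 0, 1, 1],
    [0, 1, 1, 0],
], dtype=float)
activity = np.array([1.8, 0.9, 1.1, 0.4, 2.6, 2.3, 1.0])  # invented assay values

# Add an intercept column and solve by least squares.
design = np.hstack([np.ones((X.shape[0], 1)), X])
coeffs, *_ = np.linalg.lstsq(design, activity, rcond=None)

for mutation, effect in zip(mutations, coeffs[1:]):
    label = "beneficial" if effect > 0.2 else "detrimental" if effect < -0.2 else "neutral"
    print(f"{mutation}: coefficient {effect:+.2f} -> {label}")
```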

Codexis and Merck combined several of these approaches to engineer a transaminase for the synthesis of sitagliptin. Starting from an enzyme with no activity toward the substrate and minimal activity toward a truncated analogue, 11 rounds of engineering (Figure 14) led to a catalyst that outcompeted the alternative rhodium-catalyzed reductive amination process in terms of efficiency, yield, enantioselectivity, and waste formation. Overall, 27 mutations were introduced using a combination of site-saturation mutagenesis, combinatorial libraries (including diversity from homologous sequences), ProSAR, and epPCR, screening a total of 36,480 variants. [123] This highlights that, even with the use of tools to maximize the efficiency of evolution, a huge screening effort may still be required for significant catalyst improvements.

Figure 14. Evolution of ATA-117 to produce sitagliptin, [123] compared to the chemocatalytic route using a rhodium catalyst. Over 11 rounds of evolution, the conditions of the screening (substrate loading, temperature, cosolvent concentration (rounds 3–6 MeOH, otherwise DMSO)) were gradually increased to the process level. Overlaid is the steady increase in conversion under process conditions, as well as the increase in the total number of mutations (note that several mutations changed throughout the process).

In another, more recent example, GSK evolved an imine reductase (IRED) to meet the process requirements for the synthesis of the LSD1 inhibitor GSK2879552, currently in clinical trials. [124] Screening of their in-house panel (of at least 85 IREDs) [125] revealed a suitable candidate for mutagenesis. Given the scarcity of structural data on IREDs and the fact that their highly dynamic mechanism is not fully understood, an initial round of site-saturation mutagenesis was carried out on 256 of 296 positions. Beneficial mutations from that round were then used to generate combinatorial libraries, which were analyzed using the proprietary CodeEvolver software from Codexis. Statistical analysis was performed to identify pairwise interactions of beneficial mutations, which were then included in another combinatorial library in a final third round of evolution, yielding an enzyme with 13 mutations that met or exceeded the process requirements and resulted in improved sustainability metrics over the previous route (Figure 15). The enzyme was then used to synthesize 1.4 kg of GSK2879552 for use in additional rounds of clinical trials.

Figure 15. Engineering of an IRED for the synthesis of GSK2879552, and the alternative chemical route. Insert: improvement of the catalyst over 3 rounds of evolution; acceptable operating space (black dotted line), wild type IR-46 (grey), M1 (orange), M2 (green), M3 at small scale (blue) and process scale (red). Adapted from Schober et al. [124]

Increasingly, directed evolution has also been applied to create enzymes carrying out reactions not observed in nature, by exploiting the promiscuous nature of enzymes (Figure 16). Frances Arnold's group reported the evolution of a cytochrome c (cyt c), [126] a protein without any catalytic role in nature, to form carbon-silicon bonds, a reaction likewise not found in nature. After screening several P450 enzymes, myoglobins, and cyt c variants, they identified a cyt c from Rhodothermus marinus with low levels of catalytic activity for this reaction. Iterative site-saturation mutagenesis of just three key residues – an iron-coordinating methionine and two additional residues close to the heme group – resulted in a catalyst with a total turnover number (TTN) of >1500, a >33-fold improvement over the wild type (wt) and a >375-fold improvement over free heme, outperforming the best chemical catalysts for this reaction. In addition, the turnover frequency (TOF) was increased 7-fold, the reaction proceeded with nearly perfect enantioselectivity, and it was chemoselective for carbene insertion into silanes over alcohols and amines (Figure 16).

Figure 16. Top: Exploiting enzyme promiscuity to evolve new catalytic activity. Enzymes often exhibit promiscuous activity toward non-native substrates or reactions. By applying directed evolution, a “specialist” enzyme might be transformed into another specialist enzyme for the new activity, at the cost of diminishing its original function. Such a transformation proceeds through a “low-fitness valley” where the enzyme is not very good at either the new or the original function. Figure reproduced from ref. [93] Bottom: This concept was applied to evolve a cytochrome c from Rhodothermus marinus (without any native catalytic function) to catalyze Si−H carbene insertions. [126] The final variant was 33x more active than the parent and became more specialized for Si−H insertion over N−H insertion chemistry, both promiscuous activities of the wt.

Clearly, in addition to the generation of efficient libraries, the choice of template was key in both examples above. Modern genomics has created huge databases of genes, and it has become remarkably easy to identify new sequences that likely have a given catalytic function. The dramatic reduction in the cost of synthetic genes has made it possible to create panels of these sequences, in a way producing a “smart” library of sequences already pre-selected by natural evolution. The likely function of a DNA sequence may be determined from sequence similarity to other proteins of known function (using search algorithms such as BLAST), [127] as well as from the identification of motifs. [128] In addition, structure-based searches of crystal structures of proteins of unknown function may yield new biocatalysts. [129]
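Function assignment by sequence similarity is, at its core, a search for database entries that resemble a query sequence. A full BLAST implementation is far beyond a few lines, so the sketch below uses a simple alignment-free k-mer (Jaccard) similarity purely to illustrate the idea of ranking candidate sequences against a query; the sequences and annotations are invented, and real searches would use alignment-based tools such as BLAST or profile (HMM) methods.

```python
# Illustrative stand-in for similarity-based function assignment: rank "database" entries
# by k-mer (Jaccard) similarity to a query sequence. Sequences and annotations are invented;
# real function assignment relies on alignment-based tools such as BLAST.

def kmers(seq: str, k: int = 3) -> set[str]:
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def jaccard(a: str, b: str, k: int = 3) -> float:
    ka, kb = kmers(a, k), kmers(b, k)
    return len(ka & kb) / len(ka | kb)

database = {  # invented entries: sequence fragment -> annotated function
    "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ": "transaminase (putative)",
    "MSTETLRLQKARATEEGLAFETPGGLTRALRDG": "imine reductase (putative)",
    "MKLVINGKTLKGEITVEGAKNAALPILAATLLA": "unknown",
}
query = "MKTAYIAKQRQISFVKSHFARQLEERLGLIEVQ"  # invented query sequence

for seq, annotation in sorted(database.items(), key=lambda kv: -jaccard(query, kv[0])):
    print(f"{jaccard(query, seq):.2f}  {annotation}")
```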

7. Enzyme immobilization and flow chemistry

Protein immobilization, which, as already outlined above, is a key strategy for enzyme stabilization and reusability, has also become more advanced, and a whole plethora of different strategies and supports are available (Figure 17). These are needed in part because protein immobilization can be highly unpredictable, and because of application-dependent requirements on the immobilized catalyst (such as particle size, swelling, hydrophilicity/hydrophobicity, etc.). Enzyme immobilization may be mechanical or physicochemical; the latter can be further divided into covalent and noncovalent (adsorption) immobilization. Immobilized proteins may also be used in continuous (i. e. flow chemistry) processes, which is particularly attractive for scalability, improved efficiency, and increased control. [130]

Figure 17. Examples of enzyme immobilization strategies. A) Mechanical entrapment restricts the diffusion of the enzyme. B) Adsorption through ionic interactions, offering little control over the orientation of the enzyme. C) Adsorption through affinity, in this case His-tag–metal coordination, allowing control over the orientation of the enzyme through the tag placement. D) Covalent attachment, offering little control over the orientation of the enzyme. Multipoint attachment can lead to irreversible deformation of the enzyme shape. Common functional groups for covalent attachment are carboxylic acids, aldehydes, and epoxides – using amide formation, reductive amination, and ring opening, respectively. E) Affinity-directed covalent immobilization orients the enzyme prior to covalent attachment. F) By fusing a (small) protein to the enzyme, covalent immobilization and any shape disruption can be localized to that fusion protein, reducing the effect on the enzyme. However, such an enzyme is more exposed to the environment and the stability benefits of immobilization may be diminished. Not shown are covalent crosslinking of enzymes (e. g. using a dialdehyde), and the inherently different properties of supports, with respect to e. g. their size, pore size, hydrophilicity/hydrophobicity, etc.

Mechanical immobilization relies on the entrapment of the enzyme in a matrix that restricts its movement (Figure 17A). This has the advantage that the enzyme itself is not modified, while allowing its environment to be fine-tuned. However, mass transfer to and from the enzyme is often impaired, [131] and leaching of the enzyme can occur. [132] Adsorption onto a solid support is another simple immobilization strategy, but it too often suffers from leaching of the enzyme from the support. In its simplest form (hydrophobic, hydrophilic, or ionic interactions between support and enzyme; Figure 17B), no control over the orientation of the enzyme is achieved and the entrance to the active site may become blocked. [133] One of the most widely used biocatalysts, CalB, is immobilized in this way (Novozym 435), through hydrophobic interactions between the enzyme and an acrylic resin support. While leaching is an issue in an aqueous environment, it is suppressed in the organic solvents in which this catalyst is usually used. [134]

More specific adsorption is possible with the use of tags. Attached at either the N- or C-terminus of the protein, they can help orient the enzyme in a favorable position. Examples include the polyhistidine tag (His-tag, Figure 17C), originally developed for efficient protein purification in 1988, [135] which coordinates to transition metal cations; streptavidin, with its remarkably high affinity for biotin; and sugar–lectin interactions. While a His-tag is encoded genetically, the other two examples require biotinylation or glycosylation, respectively. This also allows enzyme purification and immobilization to be combined into a single step. While these interactions are stronger than the simple adsorption described above, low levels of leaching can still occur and pose problems for applications in flow, where any enzyme leaching from the column will be lost (as opposed to batch processes, where temporarily detached enzyme remains in the vicinity of the support and can, in principle, reattach). One example is EziG beads, made of controlled-porosity glass that has been modified to coordinate metal ions. [136a] They appear to be predominantly used in organic solvents, [136] which appear to suppress leaching, although use in an aqueous environment without leaching has also been reported (using Fe3+ as the cation). [137]

The problem of leaching can be fully avoided using covalent immobilization (Figure 17D). However, this often results in severe distortions of the enzyme and loss of activity, although the outcome is highly dependent on the enzyme and support and often unpredictable. Leaching may also still occur for multimeric proteins if not all subunits are covalently attached. Moreover, once the enzyme has degraded, the support cannot be reused, whereas with adsorption the enzyme may be desorbed and replaced with fresh enzyme. [134] The orientation of the enzyme may be controlled by initially adsorbing the enzyme onto the support using tags, followed by the formation of the covalent attachment (Figure 17E). The distortion of the enzyme can be alleviated using small protein tags, with the covalent attachment points being on that protein rather than on the enzyme itself (Figure 17F). [138] However, as the enzyme is then more exposed to solvent, the stabilizing effect of immobilization is reduced. One elegant tag-directed covalent immobilization strategy is the SpyTag/SpyCatcher system, a peptide–protein pair that spontaneously forms an isopeptide bond when the two come together. [139]

Enzymes may also be covalently cross-linked into cross-linked enzyme aggregates (CLEAs), an immobilization strategy that requires no support. However, the poorly defined properties (such as particle size) of these aggregates often render them unsuitable for flow applications; adsorption of the aggregates onto a support with more defined properties can alleviate this issue. Glucose isomerase, immobilized in this way, is being used for the production of HFCS in flow on a scale of 10 million tons per year, using 500 tons of the immobilized catalyst.[ 45f , 108b , 140 ] Regardless of the immobilization technique used, tuning the characteristics of the support, e. g. its size, pore size, and hydrophilicity/hydrophobicity, is also key. For example, tuning the composition of the support shifted the synthesis/hydrolysis (S/H) ratio of covalently immobilized penicillin acylase in favor of synthesis (Scheme 5). This was a key step in rendering it suitable for the kinetically controlled synthesis of various semi-synthetic β-lactam antibiotics.[ 44 , 47 , 78 ]

Scheme 5. Competing acyl transfer (red) and hydrolysis (blue) reactions catalyzed by penicillin acylase. By tuning the characteristics of the support, synthesis can be kinetically favored over the thermodynamically favored hydrolysis reaction.

In some cases, subunit dissociation followed by leaching can significantly reduce the operational stability of covalently immobilized biocatalysts. In those cases, coating the biocatalysts either before or after immobilization with other polymers, such as polyethylenimine (PEI) or activated dextran, can help maintain the quaternary structure. [131] However, as with any immobilization, excessive rigidification of the enzyme can result in a loss of catalytic efficiency if structural rearrangements necessary for catalysis are impeded. In addition, the increased complexity of more sophisticated immobilization techniques often outweighs any benefits bestowed on the catalyst. In general, the simplest catalyst that can meet the process requirements is the preferred one.

Synthetic cascades, which avoid the need to purify intermediates, are very attractive due to the reduced amount of waste produced. In addition, intermediates that are too unstable to be isolated may be telescoped into the next step, offering alternatives to classical synthetic routes. Flow chemistry, being inherently modular by design, is a very important platform for such cascades. Sequential reactions can be compartmentalized, avoiding incompatibilities between reagents as well as allowing the conditions to be fine-tuned for each reaction. [130] Cascades involving biocatalysis may consist either of multiple enzymatic reactions combined in sequence, or of chemo(catalytic) reactions combined with enzymatic reactions. [141]

A nice example of the former was demonstrated by Contente and Paradisi, [142] who developed a cascade in flow converting amines into alcohols, employing a transaminase and either an alcohol dehydrogenase (ADH) or a ketoreductase (KRED), immobilized covalently on epoxide-functionalized methacrylate beads (Scheme 6A). By compartmentalizing both catalysts, reaction temperatures and times were optimized independently for each step. In addition, in-line purification steps allowed the removal of product. Recycling of the aqueous phase containing cofactors and buffer salts was also demonstrated, reducing the overall amount of cofactor required (from 1:100 to 1:2000) while also eliminating the aqueous waste stream. Uwe Bornscheuer's group demonstrated a Suzuki-Miyaura coupling in batch to produce a biaryl ketone substrate for a subsequent transaminase-catalyzed amination in flow (Scheme 6B). [143] Here, the ability of the transaminase reaction to tolerate 30 % (v/v) DMF as well as salts and palladium from the first reaction step was key. Compatibility between palladium and enzyme catalysts can be a problem, as was the case for a halogenase–Suzuki-coupling cascade in batch reported by Latham et al. (Scheme 6C). [144] When free enzyme was used, ultrafiltration or compartmentalization with a semi-permeable membrane was required to physically separate the enzyme and the Pd catalyst. Alternatively, immobilization of the halogenase into CLEAs was also successful.

Scheme 6. Three examples of enzymatic cascades. A) Transformation of amines into alcohols, using an immobilized transaminase and either an ADH or a KRED in flow. [142] B) A Suzuki cross-coupling to produce a biaryl ketone, which is then aminated using a transaminase catalyst. The transaminase had to tolerate 30 % DMF carried over from the cross-coupling, as well as Pd catalyst, excess base, and unreacted boronic acid. [143] C) Halogenation of aromatic compounds using a halogenase, followed by a Suzuki coupling. The enzyme had to be separated from the Pd catalyst, either by ultrafiltration, immobilization, or compartmentalization. [144]

In another collaboration between Codexis and Merck, a three-step, nine-enzyme cascade to synthesize the HIV drug islatravir was developed (Figure 18). [145] This involved the engineering of five enzymes to accept unnatural substrates, as well as enzyme immobilization to simplify the final purification. For this, a cost-effective affinity immobilization using a His-tag was chosen for the first two steps, while the last step used free enzymes. Enzymes from seven organisms were used, and each step involved enzymes from either two or three organisms and one or three evolved enzymes. Thus, by bringing together the right enzymes from the right organisms (a testament to the vast number of genome sequences available) and applying directed evolution only where needed, the number of steps in the synthesis of islatravir was cut by more than half (from 12–18 steps to three), and the overall yield was almost doubled (51 % vs. the previously reported 37 % [146] ). Atom economy was improved, overall waste was reduced, and hazardous reagents and conditions (such as the Birch reduction in Fukuyama et al.'s synthesis [146] ) were avoided.

Figure 18. Nine-enzyme cascade to produce the HIV drug islatravir. Five enzymes had to be evolved. Compared to a chemical synthesis, steps were reduced by more than half and yield was almost doubled. No purification of intermediates was necessary. Immobilized enzymes shown attached to spheres. Figure adapted from Huffman et al. [145]

8. Summary and Outlook

Through the elucidation of enzymatic mechanisms and enzyme structures, as well as the development of powerful tools for DNA manipulation, engineered enzymes are being applied in increasingly complex syntheses. However, challenges remain. Enzyme engineering is time-consuming and, while excellent enzyme variants have been created, it is not always successful. Even though significant advances have been made in understanding protein folding and predicting the effects of mutations, we still rely on the principles of directed evolution developed by Frances Arnold. Further research into the properties of enzymes is necessary to make them more predictable, which will lead to a future where enzymes can be applied more routinely.

Promising approaches include the use of increasingly sophisticated machine learning, based on large sets of sequences of known function (either wild-type sequences or mutants).[ 120 , 147 ] Exploring a large sequence space has become increasingly possible due to advances in gene synthesis, building on the DNA synthesis strategies and improvements in DNA sequencing mentioned earlier. [148] Understanding the trajectories of substrates, in addition to their interactions once inside the active site (the aim of traditional substrate docking), allows new target residues to be identified. [149] Additionally, the de novo design of proteins has offered a window into entirely new protein sequences unknown in nature. In addition to offering exciting new catalysts, this is also an invaluable way of testing our understanding of protein folding and the factors that influence catalytic efficiency. [150] Expanding the genetic code to include unnatural amino acids with functional groups not found in nature may further increase enzyme performance and open up new reactions currently outside the scope of biocatalysis. [151] Synthetic biology, which focuses on the introduction of new metabolic pathways into organisms to produce valuable chemicals from cheap and renewable starting materials, is another exciting emerging field. [152]

More fundamental challenges also remain, such as protein expression, which is often difficult to predict. While E. coli has undoubtedly been the expression organism of choice, due to the vast number of molecular biology tools available and the ease with which it can be grown, some proteins cannot be expressed at high levels or in soluble form and, recently, concerns have been raised about low levels of endotoxins that are sufficient to cause an immune response. [153] Thus, alternative expression systems, such as fungi and extremophilic archaea, are needed to allow the expression of proteins incompatible with E. coli and to bypass potential toxicity issues. [154]

Lastly, it is still extremely challenging to bring a biocatalytic process to market. Often, even heavily engineered enzymes fall short in terms of space-time yield compared to the best heterogeneous catalysts (particularly challenging for bulk chemicals). Additionally, a significant investment of both time and money is necessary to develop a biocatalytic process. This is especially true for synthetic biology, e. g. in the case of the anti-malarial drug artemisinin, which required 10 years of research and >$150 million to engineer an organism that could produce it. [155] Clearly, additional research to address these issues is needed, and closer interaction between academia and industry could further speed up the process. Nonetheless, the many examples of successful biocatalytic processes mentioned in this review (as well as many not mentioned) highlight the power of enzymes and the bright future of the field of biocatalysis.

Conflict of interest

The authors declare no conflict of interest.

Biographical Information

Growing up in Germany, Christian M. Heckmann studied Chemistry at the University of Nottingham, earning his MSci in 2017. He then started a PhD with Francesca Paradisi in collaboration with Johnson Matthey, studying biocatalytic approaches to the synthesis of chiral amines.


Prof. Francesca Paradisi is the Chair of Pharmaceutical and Bioorganic Chemistry at the University of Bern. Biocatalysis as a sustainable approach to the synthesis of valuable products is the focus of her research group. In particular, the group has developed a number of enzyme-based processes in continuous flow, reducing the gap between academic discovery and industrial application.


Acknowledgements

This work was supported by the Biotechnology and Biological Sciences Research Council through the iCASE scheme in collaboration with Johnson Matthey [grant number BB/M008770/1].

C. M. Heckmann, F. Paradisi, ChemCatChem 2020, 12, 6082.


FST 123 - An Introduction to Enzymology

COURSE GOALS: Food Science 123 is designed to give students an understanding of the physical, chemical, and kinetic properties of enzymes. Purification, characterization, and quantitative evaluation of the influence of parameters such as concentrations of substrate and enzyme, pH, temperature, and inhibitors on activity are stressed. Specificity and mechanism of action of enzymes are described by considering examples selected from among enzymes of importance to food science, nutrition, and the biological sciences.

ENTRY LEVEL: BIS 102 and 103

COURSE FORMAT: The course is presented in two 1 1/2 hour lectures per week. Discussion sessions are available if requested. Grading is based on two midterms (100 points each), a final (200 points), and several problem sets. Examination questions are predominantly essays and problems. All past examination questions are available for study in the Food Science Library and the Main Library. Grading is on an absolute percentage basis, not on a curve.

TOPICAL OUTLINE: Approximate number of 1 1/2 hour lectures in parentheses:

  • Introduction, historical highlights (1)
  • Enzymes as proteins; their structure (1)
  • Methods of purification (3)
  • Substrate and enzyme concentration relationships (2)
  • Midterm (1)
  • pH effects (2)
  • Temperature effects (1)
  • Inhibitors of enzymes, their importance (1)
  • Nature and role of cofactors (1)
  • Classification of enzymes (1)
  • Hydrolytic enzymes - importance, properties and mechanism of action of selected enzymes (2)
  • Oxidative enzymes - importance, properties and mechanism of action of selected enzymes (2)
  • Completion of course: analysis of course; questions (1)
  • Final 

The St. Petersburg paradox despite risk-seeking preferences: an experimental study

  • Original Research
  • Open access
  • Published: 26 October 2018
  • Volume 12, pages 27–44 (2019)


  • James C. Cox 1 ,
  • Eike B. Kroll 2 ,
  • Marcel Lichters 2 ,
  • Vjollca Sadiraj 1 &
  • Bodo Vogt   ORCID: orcid.org/0000-0002-6610-2515 2  


The St. Petersburg paradox is one of the oldest challenges of expected value theory. Thus far, explanations of the paradox aim at small probabilities being perceived as zero and the boundedness of utility of the outcome. This paper provides experimental results showing that neither diminishing marginal utility of the outcome nor perception of small probabilities can explain the paradox. We find that even in situations where subjects are risk-seeking, and zeroing-out small probabilities supports risk-taking, the St. Petersburg paradox exists. This indicates that the paradox cannot be resolved by the arguments advanced to date.


1 Introduction

The extent to which generations of researchers raise a certain issue is an indicator of its importance. One issue that researchers have repeatedly debated is a unified explanation for play of the St. Petersburg game, a paradox that has attracted researchers' interest for 300 years (Neugebauer 2010; Seidl 2013). In the original version of the St. Petersburg game, a fair coin is tossed until it comes up heads for the first time. The game pays 2^n, with n indicating the number of tosses it takes for the first occurrence of heads. While the St. Petersburg game in its original version offers an infinite expected value, people are found to pay no more than $25 for hypothetical offers to participate in the game, which may be regarded as irrational decision-making (Hacking 1980).
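A quick simulation makes the tension concrete: the expected value of the unbounded game diverges, yet the average payoff of any finite sample of plays stays modest and is dominated by rare long runs of tails. The sketch below is a plain Monte Carlo illustration and is not taken from the paper.

```python
# Monte Carlo illustration of the St. Petersburg game: toss a fair coin until heads,
# pay out 2**n where n is the number of tosses. The theoretical expected value diverges
# (each term (1/2)**n * 2**n of the sum over n contributes 1), yet sample means of
# finitely many plays remain modest and are driven by rare long runs of tails.
import random

def play_once() -> int:
    n = 1
    while random.random() < 0.5:  # tails with probability 1/2, keep tossing
        n += 1
    return 2 ** n                 # heads occurred on toss n

random.seed(1)
for trials in (100, 10_000, 1_000_000):
    payoffs = [play_once() for _ in range(trials)]
    print(f"{trials:>9,d} plays: mean payoff {sum(payoffs) / trials:10.2f}, max {max(payoffs):,d}")
```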

The paradox is of immense importance in many diverse fields such as statistical decision analysis (Seidl 2013), econophysics (Takayasu 2010), business ethics (Shrader-Frechette 1985), economics (Rieger and Wang 2006; Samuelson 1960), psychology (Mishra 2014; Chechile and Barch 2013), and even evolutionary theory (Real 1980), in particular animals' foraging decisions studied in zoology (Real 1996). In the business research context, researchers routinely refer to the paradox when discussing and analyzing, for example, crashes of high-tech stocks (Székely and Richards 2004) and, more generally, the extent to which speculator psychology has attained an irrational bent in stock markets (Durand 1957), optimal contracting decisions (Mookherjee and Png 1989), optimal portfolio strategies (Rubinstein 2002), newsvendor order decisions (Wang et al. 2009), and loss aversion in consumer behavior (Camerer 2005).

Against this background, modeling play of the St. Petersburg game is essential in many fields. Although several explanations have been proposed as solutions to this paradox, there is no general agreement on a unified model valid for different versions of the paradox. This article adds another layer of complexity to the topic, in that we present experimental evidence showing similar play in finite St. Petersburg games with conventional positive prizes (2^n) and with negative prizes (−2^n), both as monetary payments and as waiting time. These results are not compatible with loss aversion. Our data suggest that St. Petersburg games elicit similar choices that are invariant to the sign of prizes. As such, we hope our results will contribute to the ongoing discussion of the topic by providing new evidence on decisions about play of St. Petersburg games when prizes are negative, an issue that has been overlooked in the literature.

2 Literature review and experiment overview

Various researchers have provided explanations for the St. Petersburg paradox (e.g., Seidl 2013 ; Rieger and Wang 2006 ; Blavatskyy 2005 ), but with every explanation a new version of the initial game was constructed that brought back the puzzle (Samuelson 1977 ). Versions of the game have been constructed that challenge all currently popular theories of decision under risk (Cox and Sadiraj 2009 ).

Historically, the first explanation for the observed behavior was decreasing marginal utility of the outcome (Bernoulli 1954 , originally published in 1738). However, for unbounded utilities, games can be constructed with prizes correcting for decreasing marginal utility and the paradox remains. Footnote 1

Therefore, the focus shifted towards the credibility of “infinity” (i.e., no bounds on potential prizes, or on the number of coin flips). Limited time was introduced to bound the value of the St. Petersburg game (Brito 1975; Cowen and High 1988). In contrast, it was argued that the value of the lottery could in principle be unbounded but that the offer to play the game is most probably not considered credible (Shapley 1977), explaining the decision patterns found in experimental studies. The most favored solution of the paradox, however, is that utility is bounded, since otherwise one can always create lotteries leading to counterintuitive solutions (Aumann 1977). But bounding utility substitutes one paradox for another: with bounded utility, an agent will exhibit implausible large-stakes risk-aversion (Cox and Sadiraj 2009).

To avoid infinity, the St. Petersburg game was broken down into a series of finite games, but the paradox does not disappear (Samuelson 1960), questioning infinity as the underlying cause of the paradox. Other work argues that perception of small probabilities of large payoffs is the source of the paradox, since sufficiently small probabilities are regarded as zero (Brito 1975) or small chances for large prizes create big risks for the agents (Allais 1952; Weirich 1984) who are willing to buy the game at a large price. In another approach using probabilities as an explanation for the phenomenon, more recent work introduced a new weighting function for cumulative prospect theory, attempting to solve the problem of infinity (Blavatskyy 2005).

Experimental research has turned to a modified version of the question originally posed by Bernoulli: Is human choice behavior in St. Petersburg lotteries consistent with expected value theory in finite versions of the game that involve few coin tosses, n? Initial experiments followed the general idea that financial incentives provide subjects with an economic motivation for truthfully revealing their ranking of available options (Camerer and Hogarth 1999). Thus, experiments based on enhanced designs (Cox et al. 2009; Neugebauer 2010) have used real-money payoffs and finite versions of the St. Petersburg game. Data from these experiments are inconsistent with risk-neutrality but consistent with risk-aversion for moderate n (> 5), providing support for Bernoulli's general conclusion about risk-aversion (though not about a log specification) as the source of low willingness to pay for playing the game.

The present paper reports on two experiments that implement a modification of the St. Petersburg lotteries used in Cox et al. ( 2009 ), replacing positive prizes with negative ones. In Experiment 1, prizes in the St. Petersburg lotteries are −2^n, where n is the number of coin tosses it takes to (first) turn up a head, which determines how much subjects get paid to play the game. That means that, instead of offering subjects an opportunity to pay money for participating in a St. Petersburg game with positive prizes, this experiment provides subjects with an opportunity to receive a money payment for accepting an offer to participate in a St. Petersburg game in which they can only lose money. Because this sign-mirror version of the St. Petersburg game offers stochastic losses, people may make different decisions depending on whether they are risk-loving or risk-averse over losses, a topic on which there is mixed evidence (e.g., Kahneman and Tversky 1979; Holt and Laury 2002, 2005; Åstebro et al. 2015). Previous research suggests that individuals tend to be risk-seeking when it comes to gambles involving the possibility of averting financial losses (e.g., Kahneman and Tversky 1984; Scholer et al. 2010; Rieger et al. 2015). Furthermore, if St. Petersburg play follows from sufficiently small probabilities being regarded as zero (Brito 1975), then we should expect larger willingness to play the game with negative prizes than with positive prizes.

For both the original version of the (finite) St. Petersburg lotteries taken from Cox et al. ( 2009 ) and the lotteries developed for this study, participants decide about tradeoffs between possible monetary gains and losses. In the n = 3 version of the game, a subject who decides to participate pays €2.75 for sure and receives €2 in the event H (the coin lands heads on the first toss), €4 in the event T → H (the first toss is tails and the second is heads), €8 in the event T → T → H, and €0 otherwise (that is, in the event T → T → T). The expected value of participating in the n = 3 version of the finite St. Petersburg game is €0.25.

In total, 80% of the subjects decided to pay €2.75 to participate in the game (Cox et al. 2009). In the n = 3 version of the sign-mirror St. Petersburg game, the game of interest in this paper, a subject who decides to participate gets paid €3 for sure but pays back €2 in the event H, €4 in the event T → H, €8 in the event T → T → H, and €0 in the event T → T → T (see Table 1 for n = 4 and n = 9). The expected value of participating in the n = 3 sign-mirror version of the finite St. Petersburg game is €0. It is not clear how people will play this game: participation is ex ante mixed (total payoff, which is prize plus payment for playing, can be positive or negative depending on the resolution of the risk), whereas the prizes that the game offers are all negative. Risk-aversion, by definition, implies rejection of the St. Petersburg game with negative prizes, S_n, for all n, as the expected value of participation is zero. Furthermore, with expected utility, risk-aversion implies that the likelihood of rejecting the St. Petersburg game increases with n (footnote 2). This is consistent with the pattern observed in Cox et al. (2009) (and also in data from the new experiments reported in this paper). On the other hand, with loss aversion (footnote 3), acceptance of participation in the conventional St. Petersburg game implies rejection of its sign-mirror St. Petersburg game.
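
The sign-mirror n = 3 game can be tabulated the same way; note in the output that total payoffs are mixed in sign even though every prize is non-positive (again our own illustration):

```python
from fractions import Fraction

# Sign-mirror n = 3 game as described above: EUR 3 participation payment,
# prize -2**i if the first head occurs on toss i (i <= 3), prize 0 otherwise.
payment = 3
events = {                      # event: (probability, prize)
    "H":     (Fraction(1, 2), -2),
    "T,H":   (Fraction(1, 4), -4),
    "T,T,H": (Fraction(1, 8), -8),
    "T,T,T": (Fraction(1, 8),  0),
}

for name, (p, prize) in events.items():
    print(f"{name:6s} prob={p}  total payoff={payment + prize:+d}")
ev = sum(p * (payment + prize) for p, prize in events.values())
print("expected value of participation:", ev)   # 0
```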

Because the (prize plus participation payment) domain of participation in a St. Petersburg game (with positive or negative prizes) is mixed, we conducted another experiment (Experiment 2), reported later in the paper, with the same lotteries as in Experiment 1, except that negative money prizes are replaced by waiting times. Additionally, in a control treatment we elicited subjects' preferences over waiting times using Holt and Laury's (2005) procedure. Our subjects' choices revealed risk-seeking attitudes when choosing among waiting times in the control treatment, which implies that they should accept the negative St. Petersburg games (as the expected value of participation and nonparticipation is 0). In Experiment 2, we also observe decisions on participation in St. Petersburg lotteries constructed on the domain of waiting times used in the control treatment.

Data from the experiments reported in this paper show that decisions on participating in St. Petersburg games are invariant to the sign of prizes. If risk attitudes over negative payoffs are risk-seeking (a common finding in the literature as well as in our risk-elicitation control treatment in Experiment 2), then the data from the experiments in this paper challenge the stability of risk attitudes over negative prizes. Thus, the present research re-emphasizes a fruitful field of future inquiry on risk attitudes in negative domains of payoffs.

3 Experiment 1: the St. Petersburg game with monetary payments

3.1 Experimental design and procedure

Fifteen students from different fields of study at a major German university participated in the experiment. Subjects were invited by email and were informed that the experiment would spread over 2 weeks and that real losses could result from participation, but that losses could be avoided by individuals' decisions. Subjects were also informed that, in the event a participant realized a loss, she would have to pay it from her own money.

During the first meeting, the participants received a show-up fee of €10. The information from the invitation was read aloud to the subjects and, subsequently, the participants filled out a form stating they were fully aware of the information provided by the experimenter and they agreed to pay any losses that might occur from participation in the experiment.

The actual experiment on decisions about St. Petersburg lotteries was performed during the second meeting, 2 weeks later, which minimizes the possibility of introducing a bias resulting from house-money effects (e.g., Rosenboim and Shavit 2012; Thaler and Johnson 1990). The experiment consisted of the same lotteries used in Cox et al. (2009), with the only difference being that all payoffs were multiplied by the factor −1 (i.e., a sign-mirrored version of the original finite St. Petersburg game). For each of 9 lotteries, the participants could choose between participating in the lottery or not (see Table 1 for n = 4 and n = 9). The lotteries varied in the maximum number of coin tosses, n, with n = 1, 2,…,9. After the participants made their decisions for all of the 9 lotteries, one lottery was randomly chosen for realization for each participant.

A subject choosing to participate in a lottery S_n with a maximum of n tosses received n euros. Then a coin was tossed until either a head occurred or the maximum number (n) of tosses for that lottery was reached. If the coin turned up heads on the i-th (i ≤ n) toss, the participant was required to pay €2^i.

If the coin did not turn up heads on any of the n tosses, the participant was not required to pay anything and her payoff remained at n euros. Thus, potential payoffs from game participation are n, n − 2, n − 4,…, n − 2^n with probabilities (1/2)^n, 1/2, 1/4, …, (1/2)^n. In case of rejection, the subject receives no payment. The expected value of the St. Petersburg game S_n (the prizes alone) is −n, and therefore the expected value of participation is 0. Risk-seeking individuals would prefer participation, whereas risk-averse individuals would reject it (footnote 4).
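
A brief simulation sketch of S_n as just described (ours, not the authors' code); the sample mean of total payoffs should be close to the theoretical expected value of zero:

```python
import random

def play_sign_mirror(n, rng=random):
    """One play of S_n: receive n euros, then toss a fair coin up to n times;
    pay 2**i euros if the first head occurs on toss i, pay nothing otherwise."""
    payoff = n
    for i in range(1, n + 1):
        if rng.random() < 0.5:          # heads
            return payoff - 2**i
    return payoff                        # no head in n tosses

random.seed(1)
for n in (3, 6, 9):
    draws = [play_sign_mirror(n) for _ in range(200_000)]
    print(n, round(sum(draws) / len(draws), 3))   # close to 0 for each n
```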

3.2 Results of Experiment 1

The analysis of the data from our version of the St. Petersburg lotteries is adapted from the procedures used in Cox et al. (2009) to allow comparisons. The analysis focuses on the proportion of subjects rejecting the St. Petersburg lottery as a function of the maximum number of coin tosses, n. For the original form of the St. Petersburg lotteries involving positive prizes, researchers (Cox et al. 2009; Neugebauer 2010) found that the proportion of subjects rejecting a lottery increased with the maximum number of coin tosses, which is consistent with risk-aversion because increasing n increases the spread of payoffs while preserving the mean. We see the same pattern in the sign-mirror version of the St. Petersburg game with negative prizes. Figure 1 shows choices for positive prizes (light grey bars) and negative prizes (dark grey bars). While the proportion of subjects rejecting the lotteries is higher on average with negative prizes, the pattern of increasing rejections appears in both games. Therefore, the pattern observed when subjects make decisions about St. Petersburg lotteries is robust to whether monetary prizes are positive or negative. Specifically, a significant correlation between n and the number of subjects rejecting the game (r_Spearman = 0.983, p < 0.001) supports the notion of increasing rejection. Likewise, across n, the Pearson correlation between the rejection rates reported in Cox et al. (2009) and in Experiment 1 approaches 1 (r_Pearson = 0.941, p < 0.001). Additionally, whereas risk-seeking behavior suggests an increasing willingness to play with increasing n, we were not able to observe this tendency for any of the individuals (p < 0.001 in a binomial test). With S-shaped utility and loss aversion, acceptance of the positive St. Petersburg game implies rejection of the negative St. Petersburg game (see "Appendix"). This prediction is rejected by data from games with 1, 2, and 3 tosses: acceptance rates in the positive St. Petersburg game are [0.87, 0.87, 0.8], so rejection rates in the negative St. Petersburg game are predicted to be at least as large, but we observe [0.4, 0.47, 0.67].
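
The last comparison can be checked directly from the rates quoted above; the short sketch below uses only those published figures, not the underlying choice data:

```python
# Reported rates for n = 1, 2, 3 (from the text above).
acceptance_positive = [0.87, 0.87, 0.80]   # Cox et al. (2009), positive prizes
rejection_negative  = [0.40, 0.47, 0.67]   # Experiment 1, negative prizes

# Loss aversion (with S-shaped utility) predicts: anyone who accepts the positive
# game rejects its sign-mirror, so rejection_negative[i] >= acceptance_positive[i].
for n, (acc, rej) in enumerate(zip(acceptance_positive, rejection_negative), start=1):
    print(f"n={n}: predicted rejection >= {acc:.2f}, observed {rej:.2f} "
          f"-> prediction {'violated' if rej < acc else 'satisfied'}")
```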

Figure 1: Comparison of choices for positive or negative money prizes

3.3 Discussion of Experiment 1

The experiment's results show that St. Petersburg game play is invariant to gain vs. loss framing of the lotteries. Experiment 1, however, has two limitations. First, the literature suggests that subjects sometimes have a hard time believing that potential financial losses realized in experiments will actually become relevant for them (e.g., Thaler and Johnson 1990). Likewise, a particular type of subject tends to be more prone to participate in experiments with potential monetary losses, creating the possibility of selection bias (Etchart-Vincent and l'Haridon 2011). Both aspects could be problematic for the internal validity of the findings (footnote 5). Second, Experiment 1 drew on extant studies in assuming a risk-seeking tendency for gambles over financial losses; we therefore did not verify whether our subjects' risk attitudes over losses were indeed risk-seeking.

Experiment 2 addresses these issues while operating with a larger sample. This second investigation substitutes waiting times for financial losses, an approach previously shown to provide an appropriate way to integrate realized losses in prospect theory experiments (e.g., Abdellaoui and Kemel 2014). In addition, a control treatment provides data revealing that subjects are risk-seeking in gambles over waiting times.

4 Experiment 2: waiting times

4.1 Experimental design and procedure

The experiment was conducted with 86 students from different fields of study enrolled at the same major German university as in Experiment 1. The participants were randomly divided into three groups: one group was used to elicit risk preferences for waiting time (Treatment 1, 36 participants), and two groups played St. Petersburg lotteries with different base waiting times (Treatment 2, 25 participants, 10 min; Treatment 3, 25 participants, 45 min). All participants received a show-up fee of €8 before the experiment instructions were handed out in the university's behavioral economics laboratory. Furthermore, it was made clear that there would be no monetary rewards from participation other than the consequences described in the instructions. Unlike Experiment 1, the second experiment did not use two separate appointments, because subjects are unlikely to integrate a monetary show-up fee with waiting times into one mental account (Abdellaoui and Kemel 2014).

4.1.1 Risk preference for waiting time

To elicit preferences over waiting time, participants were asked to choose between two lotteries in each of ten pairs of lotteries where “payoffs” were determined as waiting time. After choices were made, one choice pair was randomly selected for realization (e.g., Lichters et al. 2016 ). In the experiment, participants were told that their decisions would determine a waiting time in the laboratory. This waiting time started after all decisions were made and the chosen lotteries were played out. The participants spent this waiting time in a laboratory cabin without any communication devices or books.

The options were presented in a format similar to the one used in Holt and Laury (2005): option A offered less risk but a higher minimum waiting time (a waiting time of either 30 or 40 min), and option B offered more risk but the chance of a much shorter waiting time (a waiting time of either 5 or 60 min). The probability of the favorable outcome was always the same in the two options but varied between 0.1 and 1.0 across rows, as shown in Table 2. Therefore, risk preferences for waiting time could be elicited for each participant from the row in which option B was chosen over option A for the first time. A switching point in row four or earlier indicates risk-seeking behavior; a switching point in row five or later indicates risk-averse behavior.
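
To make the classification rule concrete, the sketch below computes the expected waiting time of each option row by row, assuming the outcomes quoted above (30 or 40 min for option A, 5 or 60 min for option B) and favorable-outcome probabilities of 0.1, 0.2, …, 1.0; Table 2 itself is not reproduced here, so the row layout is our assumption.

```python
# Expected waiting times per row, assuming option A yields 30 min (favorable)
# or 40 min, option B yields 5 min (favorable) or 60 min, and the probability
# of the favorable outcome rises from 0.1 to 1.0 across the ten rows.
for row in range(1, 11):
    p = row / 10
    ev_a = p * 30 + (1 - p) * 40
    ev_b = p * 5 + (1 - p) * 60
    shorter = "B" if ev_b < ev_a else "A"
    print(f"row {row:2d}: E[A]={ev_a:5.1f} min  E[B]={ev_b:5.1f} min  "
          f"shorter expected wait: {shorter}")
# A subject who minimizes expected waiting time would switch to B in row 5;
# switching in row 4 or earlier therefore indicates risk-seeking over waiting time.
```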

After the choices were made, the experimenter drew a ball from a bingo cage with balls labeled from one to ten that determined which decision was selected. Then the outcome from the lottery the participant chose for the decision in the corresponding row was realized and the waiting time started.

4.1.2 St. Petersburg game

Participants in these treatments were offered a series of St. Petersburg lotteries. All subjects had a base waiting time (Treatment 2, 10 min; Treatment 3, 45 min) and were offered an opportunity to participate in a lottery in which this waiting time could be reduced or increased depending on their decision and the outcome of the lottery. This lottery was designed analogously to the St. Petersburg lottery used in Experiment 1 (and Cox et al. 2009). For participation in the lottery, the waiting time was first reduced by n minutes, and subsequently a coin was tossed until a head occurred, with a maximum of n tosses. If a head occurred on the i-th toss, the waiting time was increased by 2^i minutes. Each participant was offered nine lotteries, differing in the maximum number of tosses n, with only one decision being randomly selected for implementation. Table 3 shows potential waiting times for n = 9 and a base time of 10 min; here we describe the case n = 3. Suppose decision 3 was randomly selected for implementation. If the subject had chosen not to play the game, then his waiting time was the base waiting time of 10 min (or 45 min in the 45-min base-time condition).

If the subject had chosen to play the game, then the base waiting time was reduced by 3 minutes, to 7 (or 42) minutes. Then a coin was tossed. If it came up heads on the first toss (an event with probability 1/2), the waiting time was increased by 2 minutes, to 9 (or 44) minutes. If it came up heads on the second toss (an event with probability 1/4), the waiting time was increased by 4 minutes, to 11 (or 46) minutes. If it came up heads on the third toss, the waiting time was increased by 8 minutes, to 15 (or 50) minutes. If the coin did not come up heads on any of the three tosses, the waiting time remained at 7 (or 42) minutes.
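
The payoff rule just described can be restated compactly; the sketch below (our own restatement, not the authors' code) lists the possible waiting times for n = 3 under both base times and checks that the expected waiting time equals the base time.

```python
def waiting_times(n, base):
    """Possible waiting times (in minutes) from playing the lottery with at most n
    tosses: the base time is reduced by n, then increased by 2**i minutes if the
    first head occurs on toss i; with probability (1/2)**n no head occurs."""
    outcomes = [(f"first head on toss {i}", 0.5**i, base - n + 2**i)
                for i in range(1, n + 1)]
    outcomes.append((f"no head in {n} tosses", 0.5**n, base - n))
    return outcomes

for base in (10, 45):
    print(f"base waiting time {base} min, n = 3:")
    for label, prob, minutes in waiting_times(3, base):
        print(f"  {label:22s} prob={prob:.3f}  waiting time={minutes} min")
    expected = sum(p * m for _, p, m in waiting_times(3, base))
    print("  expected waiting time:", expected, "min")   # equals the base time
```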

After the participants made their choices, the experimenter drew a ball from a bingo cage containing balls numbered from one through nine to select which game would be realized. The waiting time for all participants who chose not to play the selected game was the baseline time. For each participant who chose to play the game, the experimenter tossed the coin as described above and determined the actual waiting time. All participants spent their waiting time in a laboratory cabin without communication devices or other forms of entertainment. To control for scale and/or reference-point effects (Farber 2008; Kőszegi and Rabin 2007), we ran two treatments with different base waiting times of 10 and 45 min.

4.2 Results of Experiment 2

4.2.1 Risk preferences for waiting time

As described above, subjects can be classified as risk-seeking or risk-averse for choices on waiting time by looking at the first row in which option B is chosen. In Table  2 , it can be inferred from the differences in expected values that risk-seeking individuals would choose option B for the first time in row four or earlier, while the switching point from option A to option B would be in row five or later for risk-averse subjects. Table  4 reports the frequencies for rows in which subjects switched to option B: any subject who chooses option A in rows one through three and chooses option B in rows four through ten is noted in column four, while any subject who chooses option A in rows one through four and then switches to option B is noted in column five.

We observe that 27 out of 36 subjects (75.00%; 95% CI [0.58, 0.88]) made choices consistent with risk-seeking behavior. Therefore, we conclude that the vast majority of subjects exhibit risk-seeking behavior when making decisions about stochastic waiting time (p = 0.002, binomial test).
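
This headline result can be recomputed from the reported counts alone. The sketch below uses SciPy's exact binomial test; since the paper does not state whether its p value is one- or two-sided, both are shown.

```python
from scipy.stats import binomtest

k, n = 27, 36   # subjects classified as risk-seeking over waiting time
res_two = binomtest(k, n, p=0.5, alternative="two-sided")
res_one = binomtest(k, n, p=0.5, alternative="greater")

print("proportion:", round(k / n, 4))
print("two-sided p:", round(res_two.pvalue, 4))
print("one-sided p:", round(res_one.pvalue, 4))
print("95% CI:", res_two.proportion_ci(confidence_level=0.95))
```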

4.2.2 St. Petersburg game

Data from the elicitation of risk attitudes over waiting time suggest that the majority of subjects are risk-seeking and therefore should play all of the offered St. Petersburg lotteries for waiting time. The expected value of the offered gambles on waiting times is equal to the base waiting time, so a risk-seeking individual would choose to participate in all offered gambles. Data from the St. Petersburg game with waiting times as prizes show that while individuals do participate in the gambles for small reductions of the base waiting time, they do not for larger possible reductions. Furthermore, decision patterns are similar to those found in the real-payoff experiment with St. Petersburg lotteries with positive money prizes (Cox et al. 2009) and in Experiment 1 reported above, with negative money prizes (Fig. 2).

Figure 2: Comparison of rejections of St. Petersburg lotteries by money risk-averse and time risk-seeking subjects

With increasing n, subjects exhibited decreasing willingness to play. Within the 10-min condition (dark grey bars), the correlation between n and the number of subjects rejecting the game is r_Spearman = 0.880, p = 0.002, whereas the 45-min condition (ocher bars) yields r_Spearman = 0.953, p < 0.001. As in Experiment 1 on monetary losses, both waiting-time conditions show a pronounced correlation with the results reported by Cox et al. (2009) on positive monetary prizes (r_Pearson, 10 min = 0.922, p < 0.001; r_Pearson, 45 min = 0.937, p < 0.001). Again, we were hardly able to observe behavior consistent with risk-seeking at the individual level (increasing willingness to play with increasing n), neither in the 10-min condition (3/25, p < 0.001 in a binomial test) nor in the 45-min condition (1/25, p < 0.001 in a binomial test).

In the treatment with a base waiting time of 10 min, 2 of the 25 participants chose never to play the game, while the rest mostly started by playing the first game but switched to answering 'no' at some n. None of the participants chose to play all offered games. The data from the treatment with a base waiting time of 45 min yielded similar results. Fewer subjects were willing to play lotteries with large reductions of the base waiting time (n ≥ 7) than in the 10-min treatment. However, the difference is not meaningful, as we were not able to identify significant differences based on Fisher's exact tests: p(n = 7) = 0.609, p(n = 8) = 0.609, p(n = 9) = 1.0.
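
For completeness, this is how such a treatment comparison can be run for a single n with Fisher's exact test; the counts below are placeholders for illustration only and are not the experiment's data.

```python
from scipy.stats import fisher_exact

# Placeholder counts (NOT the study's data): number of subjects playing vs.
# rejecting the lottery with a given n in the 10-min and 45-min treatments.
table = [[5, 20],   # 10-min treatment: play, reject
         [3, 22]]   # 45-min treatment: play, reject

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.3f}, p = {p_value:.3f}")
```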

5 Discussion and conclusion

The St. Petersburg paradox was initially designed to challenge expected value theory. It inspired the idea of using utilities, with decreasing marginal utility of money, rather than monetary payoffs to resolve the paradox. From that point on, economists have focused on developing theories of decision-making under risk that can accommodate risk-averse behavior as initially reported for hypothetical experiments with the infinite-horizon St. Petersburg lottery. Alternative ways of modeling risk-aversion have been proposed in the literature (for a review of mathematical modeling attempts, see Seidl 2013). Risk-averse behavior can be modeled by nonlinear transformation of: (1) payoffs (Bernoulli 1954; Pratt 1964; Arrow 1971), (2) probabilities (Yaari 1987), or (3) both (Quiggin 1993; Tversky and Kahneman 1992). But there are versions of St. Petersburg lotteries that produce paradoxes for all of these theories if they are defined on unbounded domains (Cox and Sadiraj 2009). If, instead, the domain is bounded, then the paradox disappears. For example, if the largest credible prize offered in a St. Petersburg lottery is 35 billion euros, then the expected value of the lottery is about 35 euros, and it would not be paradoxical if people were unwilling to pay large amounts to play.
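
To see where the 35-euro figure comes from, the sketch below (our own illustration of the bounded-domain argument) computes the expected value of a St. Petersburg lottery whose prize of 2^n euros cannot exceed a largest credible prize of 35 billion euros: every uncapped stage contributes exactly one euro, and the capped tail adds at most roughly one euro more.

```python
CAP = 35e9                        # largest credible prize, in euros
ev_truncated, n = 0.0, 0
while 2**(n + 1) <= CAP:          # stages whose prize 2**n stays below the cap
    n += 1
    ev_truncated += 0.5**n * 2**n     # each such stage contributes exactly 1 euro
ev_capped = ev_truncated + 0.5**n * CAP   # treat every longer run as paying the cap
print(n, ev_truncated, round(ev_capped, 2))   # 35 stages -> roughly 35-36 euros
```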

Although the original infinite-horizon St. Petersburg game cannot credibly be offered to anyone in a real-payoff experiment, finite versions are still of interest for the elicitation of risk preferences. Real-money experiments with positive-prize St. Petersburg lotteries produced data that are inconsistent with risk-neutrality but consistent with risk-aversion (Cox and Sadiraj 2009; Neugebauer 2010). The first experiment in this paper implements a setting in which subjects are believed to be risk-loving. It is interesting to see that St. Petersburg lotteries elicit similar behavior with negative or positive monetary prizes. Even more strikingly, our second experiment on the St. Petersburg game, with waiting times as prizes, elicits similar patterns of behavior even though most subjects appear to be risk-seekers in this domain. The payoff-domain invariance of behavior in these St. Petersburg games poses a challenge for decision theory under risk.

While the hypothesis of infinity (Brito 1975; Cowen and High 1988), associated with the original form of the game, is not directly relevant for our finite St. Petersburg games, the concern about whether participants regard the offer of the game as credible (Shapley 1977) remains valid. Thus, while infinity is not the problem in our games, is it possible to check whether participants regarded the offer as genuine?

If subjects regarded an offer as not credible, it is reasonable to assume that they believed high monetary losses or long waiting times would not actually be realized; therefore, a subject playing lotteries with a relatively large maximum number of coin tosses, n, would prefer participation in the St. Petersburg game with negative prizes. That is not what we observe in the experiment.

Other papers argue that very small probabilities are regarded as zero (Brito 1975) or that small probabilities of high wins result in high risk for a decision maker (Allais 1952; Weirich 1984) facing a large payment. While these arguments can explain the increasing pattern of rejections for the positive St. Petersburg game, they fail to explain the increasing pattern of rejections of participation that we observe in our data from St. Petersburg games with negative prizes. This is because nullification of small probabilities, or increasing risk for a risk-seeker (as revealed in the Holt–Laury elicitation treatment in Experiment 2), makes participation in the game with negative prizes more attractive as n increases; we should therefore observe a decreasing pattern of rejections as n increases, the opposite of what we see in our data, with monetary payments as well as waiting times.

In conclusion, various conjectures have been advanced to explain behavior with St. Petersburg lotteries. With the exception of risk-aversion, however, none of them can explain what is observed in our experiments with decisions over real monetary and waiting-time outcomes. And while risk-aversion can explain behavior in our St. Petersburg games, such risk-aversion in the waiting-time experiments conflicts with the risk-seeking attitudes elicited by the Holt–Laury procedure.

As our results are both surprising and relevant to business research, future researchers should empirically re-evaluate models that have been proposed to explain the behavior observed in the St. Petersburg paradox, relying on experiments involving real economic consequences. Likewise, other domains of seemingly irrational decision-making would benefit from experiments drawing on real consequences involving losses. For example, in the case of violations of rational choice such as context-dependent choice (Cox et al. 2014), researchers recently discovered that some choice anomalies decrease (e.g., the compromise effect, Lichters et al. 2015) while others increase (e.g., the attraction effect, Lichters et al. 2017) in magnitude when economic consequences are introduced instead of merely hypothetical choices.

Footnote 1: For log utility, replace payoffs 2^n with payoffs exp(2^n) and the paradox returns. More generally, for an unbounded utility u, replace payoffs 2^n with payoffs given by the inverse function, u^{-1}(2^n).

Footnote 2: It is straightforward (available upon request), though tedious, to show that for all n in our experiment, participation in S_n is a mean-preserving spread of participation in S_{n-1}. Figure 3 in the "Appendix" provides a visualization for the case of St. Petersburg games with n = 3 and n = 4.

Footnote 3: See the "Appendix" for a proof when the functional is linear in probabilities.

Footnote 4: For illustration purposes, Table 5 in the "Appendix" provides valuations of game participation for the special case of S-shaped utility of outcomes with power 0.5. For this example, the utility of participation decreases in n for the positive St. Petersburg game but increases in n for the negative St. Petersburg game. Therefore, we expect opposite patterns in the likelihood of rejection: increasing for positive prizes but decreasing for negative prizes.
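
A rough numeric illustration of footnote 4's claim is sketched below. It is our own code, not Table 5: it assumes a participation payment equal to n for the positive-prize game as well (the study's actual fees differ slightly), a piecewise power value function with exponent 0.5 and no separate loss-aversion coefficient, and evaluation that is linear in probabilities.

```python
def value(x, rho=0.5):
    """Illustrative S-shaped value function: x**rho for gains, -(-x)**rho for losses."""
    return x**rho if x >= 0 else -((-x) ** rho)

def valuation(n, sign):
    """Valuation (linear in probabilities) of participating in the n-toss game.
    sign=+1: pay a fee, win 2**i; sign=-1: the sign-mirror game. The fee is
    assumed to equal n in both cases, an approximation of the study's fees."""
    total = sum(0.5**i * value(sign * (2**i - n)) for i in range(1, n + 1))
    total += 0.5**n * value(sign * (-n))   # no head within n tosses
    return total

for n in range(1, 6):
    print(n, round(valuation(n, +1), 3), round(valuation(n, -1), 3))
# Under these assumptions the valuation (weakly) decreases in n for positive prizes
# and (weakly) increases in n for negative prizes, as footnote 4 describes.
```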

Footnote 5: We would like to thank an anonymous reviewer for making this suggestion.

Abdellaoui, Mohammed, and Emmanuel Kemel. 2014. Eliciting prospect theory when consequences are measured in time units: “Time Is Not Money”. Management Science 60 (7): 1844–1859. https://doi.org/10.1287/mnsc.2013.1829 .


Allais, Maurice. 1952. Le comportement de l’homme rationnel devant le risque: Critique des postulats et axiomes de l’école américaine. Econometrica 21 (4): 503–546.

Arrow, K.J. 1971. Essays in the theory of risk-bearing . Chicago, IL: Markham Pub. Co.


Åstebro, Thomas, José Mata, and Luís Santos-Pinto. 2015. Skewness seeking: Risk loving, optimism or overweighting of small probabilities? Theory and Decision 78 (2): 189–208. https://doi.org/10.1007/s11238-014-9417-4 .

Aumann, Robert J. 1977. The St Petersburg paradox: A discussion of some recent comments. Journal of Economic Theory 14 (2): 443–445.

Bernoulli, Daniel. 1954. Exposition of a New Theory on the Measurement of Risk. Econometrica 22 (1): 23–36.

Blavatskyy, Pavlo R. 2005. Back to the St. Petersburg Paradox? Management Science 51 (4): 677–678. https://doi.org/10.1287/mnsc.1040.0352 .

Brito, Dagobert L. 1975. Becker’s theory of the allocation of time and the St. Petersburg paradox. Journal of Economic Theory 10 (1): 123–126.

Camerer, Colin. 2005. Three Cheers—Psychological, Theoretical, Empirical—for Loss Aversion. Journal of Marketing Research 42 (2): 129–133. https://doi.org/10.1509/jmkr.42.2.129.62286 .

Camerer, Colin F., and Robin M. Hogarth. 1999. The effects of financial incentives in experiments: A review and capital-labor-production framework. Journal of Risk and Uncertainty 19 (1–3): 7–42. https://doi.org/10.1023/A:1007850605129 .

Chechile, Richard A., and Daniel H. Barch. 2013. Using logarithmic derivative functions for assessing the risky weighting function for binary gambles. Journal of Mathematical Psychology 57 (1–2): 15–28. https://doi.org/10.1016/j.jmp.2013.03.001 .

Cowen, Tyler, and Jack High. 1988. Time, bounded utility, and the St. Petersburg paradox. Theory and Decision 25 (3): 219–223. https://doi.org/10.1007/BF00133163 .

Cox, James C., and Vjollca Sadiraj. 2009. Risky Decisions in the Large and in the Small. In Risk aversion in experiments , ed. James C. Cox and G.W. Harrison, 9–40. Bradford: Emerald Group Publishing.

Cox, James C., Vjollca Sadiraj, and Ulrich Schmidt. 2014. Asymmetrically dominated choice problems, the isolation hypothesis and random incentive mechanisms. PLoS One 9 (3): e90742. https://doi.org/10.1371/journal.pone.0090742 .

Cox, James C., Vjollca Sadiraj, and Bodo Vogt. 2009. On the empirical relevance of St. Petersburg lotteries. Economics Bulletin 29 (1): 214–220.

Durand, David. 1957. Growth Stocks and the Petersburg Paradox. The Journal of Finance 12 (3): 348–363. https://doi.org/10.1111/j.1540-6261.1957.tb04143.x .

Etchart-Vincent, N., and O. l’Haridon. 2011. Monetary incentives in the loss domain and behavior toward risk: An experimental comparison of three reward schemes including real losses. Journal of Risk and Uncertainty 42 (1): 61–83. https://doi.org/10.1007/s11166-010-9110-0 .

Farber, Henry S. 2008. Reference-Dependent Preferences and Labor Supply: The Case of New York City Taxi Drivers. American Economic Review 98 (3): 1069–1082. https://doi.org/10.1257/aer.98.3.1069 .

Hacking, Ian. 1980. Strange Expectations. Philosophy of Science 47 (4): 562–567. https://doi.org/10.1086/288956 .

Holt, Charles A., and Susan K. Laury. 2002. Risk aversion and incentive effects. The American Economic Review 92 (5): 1644–1655. https://doi.org/10.1257/000282802762024700 .

Holt, Charles A., and Susan K. Laury. 2005. Risk aversion and incentive effects: New data without order effects. The American Economic Review 95 (3): 902–904.

Kahneman, Daniel, and Amos Tversky. 1979. Prospect Theory: An Analysis of Decision under Risk. Econometrica 47 (2): 263–292. https://doi.org/10.2307/1914185 .

Kahneman, Daniel, and Amos Tversky. 1984. Choices, values, and frames. American Psychologist 39 (4): 341–350. https://doi.org/10.1037/0003-066X.39.4.341 .

Kőszegi, Botond, and Matthew Rabin. 2007. Reference-Dependent Risk Attitudes. American Economic Review 97 (4): 1047–1073. https://doi.org/10.1257/aer.97.4.1047 .

Lichters, Marcel, Paul Bengart, Marko Sarstedt, and Bodo Vogt. 2017. What really Matters in Attraction Effect Research: When Choices Have Economic Consequences. Marketing Letters 28 (1): 127–138. https://doi.org/10.1007/s11002-015-9394-6 .

Lichters, Marcel, Claudia Brunnlieb, Gideon Nave, Marko Sarstedt, and Bodo Vogt. 2016. The Influence of Serotonin Deficiency on Choice Deferral and the Compromise Effect. Journal of Marketing Research 53 (2): 183–198. https://doi.org/10.1509/jmr.14.0482 .

Lichters, Marcel, Marko Sarstedt, and Bodo Vogt. 2015. On the practical Relevance of the Attraction Effect: A cautionary Note and Guidelines for Context Effect Experiments. AMS Review 5 (1–2): 1–19. https://doi.org/10.1007/s13162-015-0066-8 .

Mishra, Sandeep. 2014. Decision-Making Under Risk: Integrating Perspectives From Biology, Economics, and Psychology. Personality and Social Psychology Review 18 (3): 280–307. https://doi.org/10.1177/1088868314530517 .

Mookherjee, Dilip, and Ivan Png. 1989. Optimal Auditing, Insurance, and Redistribution. The Quarterly Journal of Economics 104 (2): 399–415. https://doi.org/10.2307/2937855 .

Neugebauer, Tibor. 2010. Moral Impossibility in the St Petersburg Paradox: A Literature Survey and Experimental Evidence. Luxembourg School of Finance Research Working Paper Series 10 (174): 1–43.

Pratt, John W. 1964. Risk Aversion in the Small and in the Large. Econometrica 32 (1–2): 122–136.

Quiggin, John. 1993. Generalized Expected Utility Theory: The Rank-Dependent Model . Boston: Kluwer Academic Publishers.


Real, Leslie A. 1980. Fitness, Uncertainty, and the Role of Diversification in Evolution and Behavior. The American Naturalist 115 (5): 623–638. https://doi.org/10.1086/283588 .

Real, Leslie A. 1996. Paradox, Performance, and the Architecture of Decision-Making in Animals. American Zoologist 36 (4): 518–529. https://doi.org/10.1093/icb/36.4.518 .

Rieger, Marc O., and Mei Wang. 2006. Cumulative prospect theory and the St. Petersburg paradox. Economic Theory 28 (3): 665–679. https://doi.org/10.1007/s00199-005-0641-6 .

Rieger, Marc O., Mei Wang, and Thorsten Hens. 2015. Risk Preferences Around the World. Management Science 61 (3): 637–648. https://doi.org/10.1287/mnsc.2013.1869 .

Rosenboim, Mosi, and Tal Shavit. 2012. Whose money is it anyway? Using prepaid incentives in experimental economics to create a natural environment. Experimental Economics 15 (1): 145–157. https://doi.org/10.1007/s10683-011-9294-4 .

Rubinstein, Mark. 2002. Markowitz’s “Portfolio Selection”: A Fifty-Year Retrospective. The Journal of Finance 57 (3): 1041–1045. https://doi.org/10.1111/1540-6261.00453 .

Samuelson, Paul A. 1960. The St. Petersburg paradox as a divergent double limit. International Economic Review 1 (1): 31–37.

Samuelson, Paul A. 1977. St. Petersburg paradoxes: Defanged, dissected, and historically described. Journal of Economic Literature 15 (1): 24–55.

Scholer, Abigail A., Xi Zou, Kentaro Fujita, Steven J. Stroessner, and E.T. Higgins. 2010. When risk seeking becomes a motivational necessity. Journal of Personality and Social Psychology 99 (2): 215–231. https://doi.org/10.1037/a0019715 .

Seidl, Christian. 2013. The St. Petersburg Paradox at 300. Journal of Risk and Uncertainty 46 (3): 247–264. https://doi.org/10.1007/s11166-013-9165-9 .

Shapley, Lloyd S. 1977. The St. Petersburg paradox: A con games? Journal of Economic Theory 14 (2): 439–442. https://doi.org/10.1016/0022-0531(77)90142-9 .

Shrader-Frechette, Kristin. 1985. Technological risk and small probabilities. Journal of Business Ethics 4 (6): 431–445. https://doi.org/10.1007/BF00382604 .

Székely, Gábor J., and Donald S.P. Richards. 2004. The St. Petersburg Paradox and the Crash of High-Tech Stocks in 2000. The American Statistician 58 (3): 225–231. https://doi.org/10.1198/000313004x1440 .

Takayasu, Hideki. 2010. How to Avoid Fragility of Financial Systems: Lessons from the Financial Crisis and St. Petersburg Paradox. In Econophysics Approaches to Large-Scale Business Data and Financial Crisis: Proceedings of Tokyo Tech–Hitotsubashi Interdisciplinary Conference + APFA7, ed. Misako Takayasu, Tsutomu Watanabe, and Hideki Takayasu, 1st ed., 197–207. Berlin: Springer Japan.

Thaler, Richard H., and Eric J. Johnson. 1990. Gambling with the house money and trying to break even: The effects of prior outcomes on risky choice. Management Science 36 (6): 643–660. https://doi.org/10.1287/mnsc.36.6.643 .

Tversky, Amos, and Daniel Kahneman. 1992. Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and Uncertainty 5 (4): 297–323. https://doi.org/10.1007/BF00122574 .

Wang, Charles X., Scott Webster, and Nallan C. Suresh. 2009. Would a risk-averse newsvendor order less at a higher selling price? European Journal of Operational Research 196 (2): 544–553. https://doi.org/10.1016/j.ejor.2008.04.002 .

Weirich, Paul. 1984. The St. Petersburg gamble and risk. Theory and Decision 17 (2): 193–202. https://doi.org/10.1007/bf00160983 .

Yaari, Menahem E. 1987. The Dual Theory of choice under risk. Econometrica 55 (1): 95–115.


Author information

Authors and Affiliations

Andrew Young School of Policy Studies, Georgia State University, 14 Marietta Street, Atlanta, GA, 30303, USA

James C. Cox & Vjollca Sadiraj

Otto-von-Guericke-University Magdeburg, Empirical Economics, Universitätsplatz 2, 39106, Magdeburg, Germany

Eike B. Kroll, Marcel Lichters & Bodo Vogt


Corresponding author

Correspondence to Bodo Vogt .


Appendix: St. Petersburg game and loss aversion

Let the utility over outcomes be given as

where \(\lambda \ge 1.\) For the purpose of this paper we assume the functional is linear in probabilities.

Result 1: If \(\lambda \ge 1\), then for all finite n, acceptance of participation in one game implies rejection of participation in its sign-mirror game.

Let S_n and P_n denote participation in the finite St. Petersburg games with negative and positive prizes, respectively, that offer up to n coin flips. The value of participating in the n-finite version of the St. Petersburg game with positive prizes, P_n, is

whereas the value of participating in the n-finite version of the St. Petersburg game with negative prizes, S_n, is

It can be verified that

For \(\lambda \ge 1\), the left-hand side of the last equation is non-positive, and therefore acceptance of S_n requires rejection of P_n; similarly, acceptance of P_n requires rejection of S_n (Fig. 3, Table 5).
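
The displayed formulas do not survive in this rendering. As a hedged sketch only, and not necessarily the authors' exact specification, one standard value function consistent with the surrounding text, together with the identity behind Result 1, would read:

```latex
% Presumed value function over outcomes (Kahneman-Tversky form), 0 < \rho \le 1:
\[
u(x) \;=\;
\begin{cases}
x^{\rho}, & x \ge 0,\\[2pt]
-\lambda\,(-x)^{\rho}, & x < 0,
\end{cases}
\qquad \lambda \ge 1 .
\]
% With participation payoffs x_i (probabilities p_i) for P_n and -x_i for S_n,
% and a functional linear in probabilities,
\[
V(P_n) = \sum_i p_i\,u(x_i), \qquad V(S_n) = \sum_i p_i\,u(-x_i),
\]
\[
V(P_n) + V(S_n) \;=\; (1-\lambda)\Big[\sum_{x_i>0} p_i\,x_i^{\rho} + \sum_{x_i<0} p_i\,(-x_i)^{\rho}\Big] \;\le\; 0 ,
\]
% so V(S_n) >= 0 (acceptance) forces V(P_n) <= 0 (rejection), and vice versa.
```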

Figure 3: S_3 is second-order stochastically dominant over S_4. On the left: cumulative distributions of S_3 (red) and S_4 (blue); on the right: \(\int_{-4}^{x} [S_{3}(z) - S_{4}(z)]\,\mathrm{d}z\)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article

Cox, J.C., Kroll, E.B., Lichters, M. et al. The St. Petersburg paradox despite risk-seeking preferences: an experimental study. Bus Res 12 , 27–44 (2019). https://doi.org/10.1007/s40685-018-0078-y


Received : 01 March 2018

Accepted : 09 October 2018

Published : 26 October 2018

Issue Date : April 2019

DOI : https://doi.org/10.1007/s40685-018-0078-y


Keywords

  • Decision-making under risk
  • St. Petersburg paradox
  • Risk-aversion



Quantum Experiment Could Finally Reveal The Elusive Gravity Particle


The graviton – a hypothetical particle that carries the force of gravity – has eluded detection for over a century. But now physicists have designed an experimental setup that could in theory detect these tiny quantum objects.

In the same way that individual particles called photons are force carriers for the electromagnetic field, gravitational fields could theoretically have their own force-carrying particles, called gravitons.

The problem is, they interact so weakly that they've never been detected, and some physicists believe they never will.

But a new study , led by Stockholm University, is more optimistic. The team has described an experiment that could measure what they call the "gravito-phononic effect" and capture individual gravitons for the first time.

The experiment would involve cooling a massive, 1,800 kilogram (nearly 4,000 pound) bar of aluminum to a hair above absolute zero, hooking it up to continuous quantum sensors, and waiting patiently for gravitational waves to wash over it. When one does, the instrument would vibrate at very tiny scales, which the sensors could see as a series of discrete steps between energy levels.

Each of those steps (or quantum jumps) would mark the detection of a single graviton.

Any potential signal could then be cross-checked against data from the LIGO facility to ensure it's from a gravitational wave event and not background interference.

It's a surprisingly elegant experiment, but there is one catch: those sensitive quantum sensors don't actually exist yet. That said, the team believes that building them should be possible in the near future.

"We're certain this experiment would work," says theoretical physicist Thomas Beitel , an author of the study. "Now that we know that gravitons can be detected, it's added motivation to further develop the appropriate quantum-sensing technology. With some luck, one will be able to capture single gravitons soon."

Of the four fundamental forces of physics , gravity is the one we're most familiar with on a daily basis, but in many ways it remains the most mysterious. Electromagnetism has the photon, the weak interaction has W and Z bosons , and the strong interaction has the gluon, so according to some models gravity should have the graviton. Without it, it's a lot harder to make gravity work with the Standard Model of quantum theory.

This new experiment could help, ironically by returning to some of the earliest experiments in the field. Starting in the 1960s, physicist Joseph Weber tried to find gravitational waves using solid aluminum cylinders, which were suspended from steel wire to isolate them from background noise. If gravitational waves swept past, the idea went, they would set off vibrations in the cylinders that would be converted into measurable electrical signals.

With this setup, Weber insisted he had detected gravitational waves as early as 1969, but his results couldn't be replicated and his methods were later discredited. Gravitational waves would remain undetected until LIGO found them in 2015.

Weber wasn't specifically looking for gravitons, but it might be possible with a 21st century upgrade to his experiment. Cryogenic cooling, along with protection from noise and other vibration sources, keeps the aluminum atoms as still as possible, so potential signals are clearer. And having a confirmed gravitational wave detector on-hand is helpful too.

"The LIGO observatories are very good at detecting gravitational waves, but they cannot catch single gravitons," says Beitel . "But we can use their data to cross-correlate with our proposed detector to isolate single gravitons."

The researchers say the most promising candidates are gravitational waves from collisions between pairs of neutron stars , within LIGO's detection range. With each event, an estimated one undecillion gravitons (that's a 1 followed by 36 zeroes) would pass through the aluminum, but only a handful would be absorbed.

The last puzzle piece is those pesky quantum sensors. Thankfully, the team believes that technology isn't too far out of reach.

"Quantum jumps have been observed in materials recently, but not yet at the masses we need," says Stockholm University physicist Germain Tobar, an author of the study. "But technology advances very rapidly, and we have more ideas on how to make it easier."

The research was published in the journal Nature Communications .



What Is the St. Petersburg Paradox?


You're on the streets of St. Petersburg, Russia, and an old man proposes the following game. He flips a coin (and will borrow one of yours if you don't trust that his is fair). If it lands tails up, then you lose and the game is over. If the coin lands heads up, then you win one ruble and the game continues. The coin is tossed again. If it is tails, then the game ends. If it is heads, then you win an additional two rubles. The game continues in this fashion. For each successive head, the winnings from the previous round are doubled, but at the first tail, the game is done.

How much would you pay to play this game? When we consider the expected value of this game, it seems you should jump at the chance, no matter what it costs to play. However, from the description above, you probably wouldn't be willing to pay much. After all, there is a 50% probability of winning nothing. This is what is known as the St. Petersburg Paradox, named after the journal in which Daniel Bernoulli's 1738 paper was published, the Commentaries of the Imperial Academy of Science of Saint Petersburg.

Some Probabilities

Let's begin by calculating the probabilities associated with this game. The probability that a fair coin lands heads up is 1/2. Each coin toss is an independent event, so we multiply probabilities, possibly with the help of a tree diagram.

  • The probability of two heads in a row is (1/2) x (1/2) = 1/4.
  • The probability of three heads in a row is (1/2) x (1/2) x (1/2) = 1/8.
  • In general, the probability of n heads in a row, where n is a positive whole number, is 1/2^n.

Some Payouts

Now let's move on and see if we can generalize what the winnings would be in each round.

  • If you have a head in the first round you win one ruble for that round.
  • If there is a head in the second round you win two rubles in that round.
  • If there is a head in the third round, then you win four rubles in that round.
  • If you have been lucky enough to make it all the way to the n-th round, then you will win 2^(n-1) rubles in that round.

Expected Value of the Game

The expected value of a game tells us what the winnings would average out to be if you played the game many, many times. To calculate the expected value, we multiply the value of the winnings from each round with the probability of getting to this round, and then add all of these products together.

  • From the first round, you have probability 1/2 and winnings of 1 ruble: 1/2 x 1 = 1/2
  • From the second round, you have probability 1/4 and winnings of 2 rubles: 1/4 x 2 = 1/2
  • From the third round, you have probability 1/8 and winnings of 4 rubles: 1/8 x 4 = 1/2
  • From the fourth round, you have probability 1/16 and winnings of 8 rubles: 1/16 x 8 = 1/2
  • In general, from the n-th round, you have probability 1/2^n and winnings of 2^(n-1) rubles: 1/2^n x 2^(n-1) = 1/2

The value from each round is 1/2, and adding the results from the first n rounds together gives an expected value of n/2 rubles. Since n can be any positive whole number, the expected value grows without bound; it is infinite.
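
A small sketch (ours, not the article's) that tabulates this running expected value and, for contrast, simulates actual plays of the game:

```python
import random

def expected_value_first_n_rounds(n):
    """Sum of (probability of reaching round k) * (winnings in round k) for k = 1..n."""
    return sum((1 / 2**k) * 2**(k - 1) for k in range(1, n + 1))   # = n / 2

def play_once(rng=random):
    """Play the game: double the per-round winnings for each head, stop at the first tail."""
    winnings, prize = 0, 1
    while rng.random() < 0.5:      # heads
        winnings += prize
        prize *= 2
    return winnings

print([expected_value_first_n_rounds(n) for n in (10, 20, 40)])   # [5.0, 10.0, 20.0]
random.seed(0)
sample = [play_once() for _ in range(100_000)]
print(sum(sample) / len(sample))   # a modest number, despite the unbounded expectation
```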

The Paradox

So what should you pay to play? A ruble, a thousand rubles or even a billion rubles would all, in the long run, be less than the expected value. Despite the above calculation promising untold riches, we would all still be reluctant to pay very much to play.

There are numerous ways to resolve the paradox. One of the simpler ways is that no one would offer a game such as the one described above. No one has the infinite resources that it would take to pay someone who continued to flip heads.

Another way to resolve the paradox involves pointing out how improbable it is to get something like 20 heads in a row. The odds of this happening (about 1 in a million) are better than the odds of winning most state lotteries, yet people routinely play such lotteries for five dollars or less. So the price to play the St. Petersburg game should probably not exceed a few dollars.

If the man in St. Petersburg says that it will cost anything more than a few rubles to play his game, you should politely refuse and walk away. Rubles aren’t worth much anyway.



The St. Petersburg Paradox

The St. Petersburg paradox was introduced by Nicolaus Bernoulli in 1713. It continues to be a reliable source for new puzzles and insights in decision theory.

The standard version of the St. Petersburg paradox is derived from the St. Petersburg game, which is played as follows: A fair coin is flipped until it comes up heads the first time. At that point the player wins \(\$2^n,\) where n is the number of times the coin was flipped. How much should one be willing to pay for playing this game? Decision theorists advise us to apply the principle of maximizing expected value. According to this principle, the value of an uncertain prospect is the sum total obtained by multiplying the value of each possible outcome with its probability and then adding up all the terms (see the entry on normative theories of rational choice: expected utility ). In the St. Petersburg game the monetary values of the outcomes and their probabilities are easy to determine. If the coin lands heads on the first flip you win $2, if it lands heads on the second flip you win $4, and if this happens on the third flip you win $8, and so on. The probabilities of the outcomes are \(\frac{1}{2}\), \(\frac{1}{4}\), \(\frac{1}{8}\),…. Therefore, the expected monetary value of the St. Petersburg game is

\[
\frac{1}{2}\cdot 2 \;+\; \frac{1}{4}\cdot 4 \;+\; \frac{1}{8}\cdot 8 \;+\; \cdots \;=\; 1 + 1 + 1 + \cdots \;=\; \infty.
\]

(Some would say that the sum approaches infinity, not that it is infinite. We will discuss this distinction in Section 2 .)

The “paradox” consists in the fact that our best theory of rational choice seems to entail that it would be rational to pay any finite fee for a single opportunity to play the St. Petersburg game, even though it is almost certain that the player will win a very modest amount. The probability is \(\frac{1}{2}\) that the player wins no more than $2, and \(\frac{3}{4}\) that he or she wins no more than $4.

In a strict logical sense, the St. Petersburg paradox is not a paradox because no formal contradiction is derived. However, to claim that a rational agent should pay millions, or even billions, for playing this game seems absurd. So it seems that we, at the very least, have a counterexample to the principle of maximizing expected value. If rationality forces us to liquidate all our assets for a single opportunity to play the St. Petersburg game, then it seems unappealing to be rational.

  • 1. The History of the St. Petersburg Paradox
  • 2. The Modern St. Petersburg Paradox
  • 3. Unrealistic Assumptions
  • 4. A Bounded Utility Function
  • 5. Ignore Small Probabilities
  • 6. Relative Expected Utility Theory
  • 7. The Pasadena Game
  • Other Internet Resources
  • Related Entries

1. The History of the St. Petersburg Paradox

The St. Petersburg paradox is named after one of the leading scientific journals of the eighteenth century, Commentarii Academiae Scientiarum Imperialis Petropolitanae [ Papers of the Imperial Academy of Sciences in Petersburg ], in which Daniel Bernoulli (1700–1782) published a paper entitled “Specimen Theoriae Novae de Mensura Sortis” [“Exposition of a New Theory on the Measurement of Risk”] in 1738. Daniel Bernoulli had learned about the problem from his cousin Nicolaus I (1687–1759), who proposed an early but unnecessarily complex version of the paradox in a letter to Pierre Rémond de Montmort on 9 September 1713 (for this and related letters see J. Bernoulli 1975). Nicolaus asked de Montmort to imagine an example in which an ordinary dice is rolled until a 6 comes up:

[W]hat is the expectation of B … if A promises to B to give him some coins in this progression 1, 2, 4, 8, 16 etc. or 1, 3, 9, 27 etc. or 1, 4, 9, 16, 25 etc. or 1, 8, 27, 64 instead of 1, 2, 3, 4, 5 etc. as beforehand. Although for the most part these problems are not difficult, you will find however something most curious. (N. Bernoulli to Montmort, 9 September 1713)

It seems that Montmort did not immediately get Nicolaus’ point. Montmort responded that these problems

have no difficulty, the only concern is to find the sum of the series of which the numerators being in the progression of squares, cubes, etc. the denominators are in geometric progression. (Montmort to N. Bernoulli, 15 November 1713)

However, he never performed any calculations. If he had, he would have discovered that the expected value of the first series (1, 2, 4, 8, 16, etc.) is:

\[
\frac{1}{6}\cdot 1 \;+\; \frac{5}{6}\cdot\frac{1}{6}\cdot 2 \;+\; \left(\frac{5}{6}\right)^{2}\frac{1}{6}\cdot 4 \;+\; \cdots \;=\; \sum_{n=1}^{\infty}\left(\frac{5}{6}\right)^{n-1}\frac{1}{6}\,2^{\,n-1}.
\]

For this series it holds that

\[
\frac{a_{n+1}}{a_{n}} \;=\; \frac{5}{6}\cdot 2 \;=\; \frac{5}{3} \;>\; 1, \qquad\text{where } a_{n} = \left(\tfrac{5}{6}\right)^{n-1}\tfrac{1}{6}\,2^{\,n-1},
\]

so by applying the ratio test it is easy to verify that the series is divergent. (This test was discovered by d’Alembert in 1768, so it might be unfair to criticize Montmort for not seeing this.) However, the mathematical argument presented by Nicolaus himself was also a bit sketchy and would not impress contemporary mathematicians. The good news is that his conclusion was correct:

it would follow thence that B must give to A an infinite sum and even more than infinity (if it is permitted to speak thus) in order that he be able to make the advantage to give him some coins in this progression 1, 2, 4, 8, 16 etc. (N. Bernoulli to Montmort, 20 February 1714)

The next important contribution to the debate was made by Cramér in 1728. He read about Nicolaus’ original problem in a book published by Montmort and proposed a simpler and more elegant formulation in a letter to Nicolaus:

In order to render the case more simple I will suppose that A throw in the air a piece of money, B undertakes to give him a coin, if the side of Heads falls on the first toss, 2, if it is only the second, 4, if it is the 3rd toss, 8, if it is the 4th toss, etc. The paradox consists in this that the calculation gives for the equivalent that A must give to B an infinite sum, which would seem absurd. (Cramér to N. Bernoulli, 21 May 1728)

In the very same letter, Cramér proposed a solution that revolutionized the emerging field of decision theory. Cramér pointed out that it is not the expected monetary value that should guide the choices of a rational agent, but rather the “usage” that “men of good sense” can make of money. According to Cramér, twenty million is not worth more than ten million, because ten million is enough for satisfying all desires an agent may reasonably have:

mathematicians value money in proportion to its quantity, and men of good sense in proportion to the usage that they may make of it. That which renders the mathematical expectation infinite, is the prodigious sum that I am able to receive, if the side of Heads falls only very late, the 100th or 1000th toss. Now this sum, if I reason as a sensible man, is not more for me, does not make more pleasure for me, does not engage me more to accept the game, than if it would be only 10 or 20 million coins. (21 May 1728)

The point made by Cramér in this passage can be generalized. Suppose that the upper boundary of an outcome’s value is \(2^m.\) If so, that outcome will be obtained if the coin lands heads on the m th flip. This means that the expected value of all the infinitely many possible outcomes in which the coin is flipped more than m times will be finite: It is \(2^m\) times the probability that this happens, so it cannot exceed \(2^m\). To this we have to add the aggregated value of the first m possible outcomes, which is obviously finite. Because the sum of any two finite numbers is finite, the expected value of Cramér’s version of the St. Petersburg game is finite.

Cramér was aware that it would be controversial to claim that there exists an upper boundary beyond which additional riches do not matter at all. However, he pointed out that his solution works even if the value of money is strictly increasing but the relative increase gets smaller and smaller (21 May 1728):

If one wishes to suppose that the moral value of goods was as the square root of the mathematical quantities … my moral expectation will be \[ \frac{1}{2} \cdot \sqrt{1} + \frac{1}{4} \cdot \sqrt{2} + \frac{1}{8} \cdot \sqrt{4} + \frac{1}{16} \cdot \sqrt{8} \ldots \]

This is the first clear statement of what contemporary decision theorists and economists refer to as decreasing marginal utility: The additional utility of more money is never zero, but the richer you are, the less you gain by increasing your wealth further. Cramér correctly calculated the expected utility (“moral value”) of the St. Petersburg game to be about 2.9 units for an agent whose utility of money is given by the root function.
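
As a check on the arithmetic (our own sketch): the series Cramér writes down sums to 1/(2 − √2) ≈ 1.71 in square-root-utility units, and Cramér equated the game's moral value with the monetary amount whose square root equals this sum, about 2.9 coins.

```python
from math import sqrt

# Cramér's series: sum over k of (1/2**k) * sqrt(2**(k-1)), truncated far enough
# that the remaining tail is negligible.
moral_expectation = sum(0.5**k * sqrt(2**(k - 1)) for k in range(1, 200))
print(round(moral_expectation, 4))        # about 1.7071, i.e. 1/(2 - sqrt(2))
print(round(moral_expectation**2, 4))     # about 2.914 coins: the money equivalent
```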

Daniel Bernoulli proposed a very similar idea in his famous 1738 article mentioned at the beginning of this section. Daniel argued that the agent’s utility of wealth equals the logarithm of the monetary amount, which entails that improbable but large monetary prizes will contribute less to the expected utility of the game than more probable but smaller monetary amounts. As his article was about to be published, Daniel’s brother Nicolaus mentioned to him that Cramér had proposed a very similar idea in 1728 (in the letter quoted above). In the final version of the text, Daniel openly acknowledged this:

Indeed I have found [Cramér’s] theory so similar to mine that it seems miraculous that we independently reached such close agreement on this sort of subject. (Daniel Bernoulli 1738 [1954: 33])

Cramér’s remark about the agent’s decreasing marginal utility of money solves the original version of the St. Petersburg paradox. However, modern decision theorists agree that this solution is too narrow. The paradox can be restored by increasing the values of the outcomes up to the point at which the agent is fully compensated for her decreasing marginal utility of money (see Menger 1934 [1979]). The version of the St. Petersburg paradox discussed in the modern literature can thus be formulated as follows:

A fair coin is flipped until it comes up heads. At that point the player wins a prize worth \(2^n\) units of utility on the player’s personal utility scale, where n is the number of times the coin was flipped.

Note that the expected utility of this gamble is infinite even if the agent’s marginal utility of money is decreasing. We can leave it open exactly what the prizes consist of; they need not be money.

It is worth stressing that none of the prizes in the St. Petersburg game have infinite value. No matter how many times the coin is flipped, the player will always win some finite amount of utility. The expected utility of the St. Petersburg game is not finite, but the actual outcome will always be finite. It would thus be a mistake to dismiss the paradox by arguing that no actual prizes can have infinite utility. No actual infinities are required for constructing the paradox, only potential ones. (For a discussion of the distinction between actual and potential infinities, see Linnebo and Shapiro 2019.) In discussions of the St. Petersburg paradox it is often helpful to interpret the term “infinite utility” as “not finite” and leave it to philosophers of mathematics to determine whether it is, or merely approaches, infinity.
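A simple simulation (a minimal sketch, not from the source; the payoffs are the \(2^n\) units of utility defined above) illustrates the point: every individual outcome is finite, yet the sample average does not settle down to any finite value.

```python
import random

def play_st_petersburg():
    """Flip a fair coin until heads; return 2**n, where n is the number of flips."""
    n = 1
    while random.random() < 0.5:   # tails with probability 1/2
        n += 1
    return 2 ** n

random.seed(0)
for trials in (10 ** 3, 10 ** 4, 10 ** 5, 10 ** 6):
    payoffs = [play_st_petersburg() for _ in range(trials)]
    # Every payoff is finite, but the average tends to creep upward as
    # rare long runs of tails produce enormous (yet still finite) prizes.
    print(trials, max(payoffs), sum(payoffs) / trials)
```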

Some authors have discussed exactly what is problematic with the claim that the expected utility of the modified St. Petersburg game is infinite (read: not finite). Is it merely the fact that the fair price of the wager is “too high”, or is there something else that prompts the worry? James M. Joyce notes that

a wager of infinite utility will be strictly preferred to any of its payoffs since the latter are all finite. This is absurd given that we are confining our attention to bettors who value wagers only as means to the end of increasing their fortune. (Joyce 1999: 37)

Joyce’s point seems to be that an agent who pays the fair price of the wager will know for sure that she will actually be worse off after she has paid the fee. However, this seems to presuppose that actual infinities do exist. If only potential infinities exist, then the player cannot “pay” an infinite fee for playing the game. If so, we could perhaps interpret Joyce as reminding us that no matter what finite amount the player actually wins, the expected utility will always be higher, meaning that it would have been rational to pay even more. Russell and Isaacs (2021:179) offer a slightly different analysis. Their point is that “however much the St. Petersburg gamble is worth, no particular outcome could be worth exactly that much”. This is because the St. Petersburg gamble is worth more than any finite outcome, but less than something worth infinitely much.

Is the St. Petersburg gamble perhaps worth something like an infinite amount of money? No. The St. Petersburg gamble is sure to pay only a finite amount of money. Suppose there is something which is worth more than each finite amount of money–such as an infinite amount of money (whatever that might come to), or a priceless artwork, or true love. If [the agent] has something like that, then the prospect of keeping it (with certainty) will dominate giving it up in exchange for the St. Petersburg gamble; thus the St. Petersburg gamble is not worth so much. Of course, nothing is worth more than each finite amount of money, yet not worth more than every finite amount of money. So the conclusion of [the agent’s] reasoning is that nothing she could bid–monetary or otherwise–would be the right price. (Russell and Isaacs 2021:179)

Decision theorists wish to clarify a means-ends notion of rationality, according to which it is rational to do whatever is the best means to one’s end. The player thus knows that paying more than what one actually wins cannot be the best means to the end of maximizing utility. But always being forced to pay too little is also problematic, because then the seller would “pay” too much (that is, receive too little). So at least one agent will be irrational and pay too much unless we can establish a fair price of the gamble. This observation enables us to strengthen the original “paradox” (in which no formal contradiction is derived) into a stronger version consisting of three incompatible claims:

  1. The amount of utility it is rational to pay for playing (or selling the right to play) the St. Petersburg game is higher than every finite amount of utility.
  2. The buyer knows that the actual amount of utility he or she will actually receive is finite.
  3. It is not rational to knowingly pay more for something than one will receive.

Many discussions of the St. Petersburg paradox have focused on (1). As we will see in the next couple of sections, many scholars argue that the value of the St. Petersburg game is, for one reason or another, finite. A rare exception is Hájek and Nover. They offer the following argument for accepting (1):

The St Petersburg game can be regarded as the limit of a sequence of truncated St Petersburg games, with successively higher finite truncation points—for example, the game is called off if heads is not reached by the tenth toss; by the eleventh toss; by the twelfth toss;…. If we accept dominance reasoning, these successive truncations can guide our assessment of the St Petersburg game’s value: it is bounded below by each of their values, these bounds monotonically increasing. Thus we have a principled reason for accepting that it is worth paying any finite amount to play the St Petersburg game. (Hájek and Nover 2006: 706)

Although they do not explicitly say so, Hájek and Nover would probably reject (3). The least controversial claim is perhaps (2). It is, of course, logically possible that the coin keeps landing tails every time it is flipped, even though an infinite sequence of tails has probability 0. (For a discussion of this possibility, see Williamson 2007.) Some events that have probability 0 do actually occur, and in uncountable probability spaces it is impossible that all outcomes have a probability greater than 0. Even so, if the coin keeps landing tails every time it is flipped, the agent wins 0 units of utility. So (2) would still hold true.

Some authors claim that the St. Petersburg game should be dismissed because it rests on assumptions that can never be fulfilled. For instance, Jeffrey (1983: 154) argues that “anyone who offers to let the agent play the St. Petersburg gamble is a liar, for he is pretending to have an indefinitely large bank”. Similar objections were raised in the eighteenth century by Buffon and Fontaine (see Dutka 1988).

However, it is not clear why Jeffrey’s point about real-world constraints would be relevant. What is wrong with evaluating a highly idealized game we have little reason to believe we will ever get to play? Hájek and Smithson (2012) point out that the St Petersburg paradox is contagious in the following sense: As long as you assign some nonzero probability to the hypothesis that the bank’s promise is credible, the expected utility will be infinite no matter how low your credence in the hypothesis is. Any nonzero probability times infinity equals infinity, so any option in which you get to play the St. Petersburg game with a nonzero probability has infinite expected utility.

It is also worth keeping in mind that the St. Petersburg game may not be as unrealistic as Jeffrey claims. The fact that the bank does not have an indefinite amount of money (or other assets) available before the coin is flipped should not be a problem. All that matters is that the bank can make a credible promise to the player that the correct amount will be made available within a reasonable period of time after the flipping has been completed. How much money the bank has in the vault when the player plays the game is irrelevant. This is important because, as noted in section 2, the amount the player actually wins will always be finite. We can thus imagine that the game works as follows: We first flip the coin, and once we know what finite amount the bank owes the player, the CEO will see to it that the bank raises enough money.

If this does not convince the player, we can imagine that the central bank issues a blank check in which the player gets to fill in the correct amount once the coin has been flipped. Because the check is issued by the central bank it cannot bounce. New money is automatically created as checks issued by the central bank are introduced in the economy. Jeffrey dismisses this version of the St. Petersburg game with the following argument:

[Imagine that] Treasury department delivers to the winner a crisp new billion billion dollar bill. Due to the resulting inflation, the marginal desirabilities of such high payoffs would presumably be low enough to make the prospect of playing the game have finite expected [utility]. (Jeffrey 1983: 155)

Jeffrey is probably right that “a crisp new billion billion dollar bill” would trigger some inflation, but this seems to be something we could take into account as we construct the game. All that matters is that the utilities in the payoff scheme are linear.

Readers who feel unconvinced by this argument may wish to imagine a version of the St. Petersburg game in which the player is hooked up to Nozick’s Experience Machine (see section 2.3 in the entry on hedonism). By construction, this machine can produce any pleasurable experience the agent wishes. So once the coin has been flipped n times, the Experience Machine will generate a pleasurable experience worth \(2^n\) units of utility on the player’s personal utility scale. Aumann (1977) notes, without explicitly mentioning the Experience Machine, that:

The payoffs need not be expressible in terms of a fixed finite number of commodities, or in terms of commodities at all […] the lottery ticket […] might be some kind of open-ended activity -- one that could lead to sensations that he has not heretofore experienced. Examples might be religious, aesthetic, or emotional experiences, like entering a monastery, climbing a mountain, or engaging in research with possibly spectacular results. (Aumann 1977: 444)

A possible example of the type of experience that Aumann has in mind could be the number of days spent in Heaven. It is not clear why time spent in Heaven must have diminishing marginal utility.

Another type of practical worry concerns the temporal dimension of the St. Petersburg game. Brito (1975) claims that the coin flipping may simply take too long. If each flip takes n seconds, we must make sure it would be possible to flip the coin sufficiently many times before the player dies. Obviously, if there exists an upper limit to how many times the coin can be flipped, the expected utility would be finite too.

A straightforward response to this worry is to imagine that the flipping took place yesterday and was recorded on video. The first flip occurred at 11 p.m. sharp, the second flip \(\frac{60}{2}\) minutes later, the third \(\frac{60}{4}\) minutes after the second, and so on. The video has not yet been made available to anyone, but as soon as the player has paid the fee for playing the game the video will be placed in the public domain. Note that the coin could in principle have been flipped infinitely many times within a single hour. (This is an example of a “supertask”; see the entry on supertasks .)

It is true that this random experiment requires the coin to be flipped faster and faster. At some point we would have to spin the coin faster than the speed of light. This is not logically impossible, although the assumption violates a contingent law of nature. If you find this problematic, we can instead imagine that someone throws a dart on the real line between 0 and 1. The probability that the dart hits the first half of the interval, \(\left[0, \frac{1}{2}\right),\) is \(\frac{1}{2}.\) And the probability that the dart hits the next quarter, \(\left[\frac{1}{2}, \frac{3}{4}\right),\) is \(\frac{1}{4}\), and so on. If “coin flips” are generated in this manner the random experiment will be over in no time at all. To steer clear of the worry that no real-world dart is infinitely sharp we can define the point at which the dart hits the real line as follows: Let a be the area of the dart. The point at which the dart hits the interval [0,1] is defined such that half of the area of a is to the right of some vertical line through a and the other half to the left of the vertical line. The point at which the vertical line crosses the interval [0,1] is the outcome of the random experiment.
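The dart construction is easy to mimic computationally: a single draw u from the unit interval settles the entire sequence of “flips”, since u falls in \(\left[0, \frac{1}{2}\right)\) with probability \(\frac{1}{2}\), in \(\left[\frac{1}{2}, \frac{3}{4}\right)\) with probability \(\frac{1}{4}\), and so on. A minimal sketch (the function name is mine):

```python
import random

def flips_from_dart(u):
    """Return n such that u lies in the n-th dyadic interval of [0, 1),
    i.e. [1 - 2**-(n-1), 1 - 2**-n); this plays the role of
    'the first heads occurs on flip n'."""
    n, left = 1, 0.0
    while True:
        right = left + 0.5 ** n        # the n-th interval has length (1/2)**n
        if u < right:
            return n
        left, n = right, n + 1

u = random.random()                    # one 'dart throw' on [0, 1)
print(u, flips_from_dart(u))
```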

In the contemporary literature on the St. Petersburg paradox practical worries are often ignored, either because it is possible to imagine scenarios in which they do not arise, or because highly idealized decision problems with unbounded utilities and infinite state spaces are deemed to be interesting in their own right.

Arrow (1970: 92) suggests that the utility function of a rational agent should be “taken to be a bounded function… since such an assumption is needed to avoid [the St. Petersburg] paradox”. Bassett (1987) makes a similar point; see also Samuelson (1977) and McClennen (1994).

Arrow’s point is that utilities must be bounded to avoid the St. Petersburg paradox and that traditional axiomatic accounts of the expected utility principle guarantee this to be the case. The well-known axiomatizations proposed by Ramsey (1926), von Neumann and Morgenstern (1947), and Savage (1954) do, for instance, all entail that the decision maker’s utility function is bounded. (See section 2.3 in the entry on decision theory for an overview of von Neumann and Morgenstern’s axiomatization.)

If the utility function is bounded, then the expected utility of the St. Petersburg game will of course be finite. But why do the axioms of expected utility theory guarantee that the utility function is bounded? The crucial assumption is that rationally permissible preferences over lotteries are continuous . To explain the significance of this axiom it is helpful to introduce some symbols. Let \(\{pA, (1-p)B\}\) be the lottery that results in A with probability p and B with probability \(1-p\). The expression \(A\preceq B\) means that the agent considers B to be at least as good as A , i.e., weakly prefers B to A . Moreover, \(A\sim B\) means that A and B are equi-preferred, and \(A\prec B\) means that B is preferred to A . Consider:

  • The Continuity Axiom: Suppose \(A \preceq B\preceq C\). Then there is a probability \(p\in [0,1]\) such that \(\{pA, (1-p)C\}\sim B\).

To explain why this axiom entails that no object can have infinite value, suppose for reductio that A is a prize check worth $1, B is a check worth $2, and C is a prize to which the agent assigns infinite utility. The decision maker’s preference is \(A\prec B\prec C\), but there is no probability p such that \(\{pA, (1-p)C\}\sim B\). Whenever p is less than 1 the decision maker will strictly prefer \(\{pA, (1-p)C\}\) to B, and if p is 1 the decision maker will strictly prefer B. So because no object (lottery or outcome) can have infinite value, and a utility function is defined by the utilities it assigns to those objects (lotteries or outcomes), the utility function has to be bounded.
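The reductio can be checked with a toy calculation (a sketch only; IEEE infinity stands in for “infinite utility”, and the dollar figures are those used above):

```python
import math

u_A, u_B, u_C = 1.0, 2.0, math.inf   # the $1 check, the $2 check, and the
                                     # supposedly infinitely valuable prize C

def lottery_value(p):
    """Expected utility of the lottery {pA, (1-p)C}."""
    if p == 1.0:
        return u_A                   # C is never awarded, so no 0 * inf
    return p * u_A + (1 - p) * u_C

# For every p < 1 the lottery is infinitely valuable and so beats B;
# at p = 1 it collapses to A, which is worse than B. No p yields
# indifference with B, contradicting the continuity axiom.
for p in (0.0, 0.5, 0.999999, 1.0):
    print(p, lottery_value(p), lottery_value(p) == u_B)
```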

Does this solve the St. Petersburg paradox? The answer depends on whether we think a rational agent offered to play the St. Petersburg game has any reason to accept the continuity axiom. A possible view is that anyone who is offered to play the St. Petersburg game has reason to reject the continuity axiom. Because the St. Petersburg game has infinite utility, the agent has no reason to evaluate lotteries in the manner stipulated by this axiom. As explained in Section 3, we can imagine unboundedly valuable payoffs.

Some might object that the continuity axiom, as well as the other axioms proposed by von Neumann and Morgenstern (and Ramsey and Savage), are essential for defining utility in a mathematically precise manner. It would therefore be meaningless to talk about utility if we reject the continuity axiom. This axiom is part of what it means to say that something has a higher utility than something else. A good response could be to develop a theory of utility in which preferences over lotteries are not used for defining the meaning of the concept; see Luce (1959) for an early example of such a theory. Another response could be to develop a theory of utility in which the continuity axiom is explicitly rejected; see Skala (1975).

Buffon argued in 1777 that a rational decision maker should disregard the possibility of winning lots of money in the St. Petersburg game because the probability of doing so is very low. According to Buffon, some sufficiently improbable outcomes are “morally impossible” and should therefore be ignored. From a technical point of view, this solution is very simple: The St. Petersburg paradox arises because the decision maker is willing to aggregate infinitely many extremely valuable but highly improbable outcomes, so if we restrict the set of “possible” outcomes by excluding sufficiently improbable ones the expected utility will, of course, be finite.

But why should small probabilities be ignored? And how do we draw the line between small probabilities that are beyond concern and others that are not? Dutka summarizes Buffon’s lengthy answer as follows:

To arrive at a suitable threshold value, [Buffon] notes that a fifty-six year old man, believing his health to be good, would disregard the probability that he would die within twenty-four hours, although mortality tables indicate that the odds against his dying in this period are only 10,189 to 1. Buffon thus takes a probability of 1/10,000 or less for an event as a probability which may be disregarded. (Dutka 1988: 33)

Is this a convincing argument? According to Buffon, we ought to ignore some small probabilities because people like him (56-year-old males) do in fact ignore them. Buffon can thus be accused of attempting to derive an “ought” from an “is”. To avoid Hume’s no-ought-from-an-is objection, Buffon would have to add a premise to the effect that people’s everyday reactions to risk are always rational. But why should we accept such a premise?

Another objection is that if we ignore small probabilities, then we will sometimes have to ignore all possible outcomes of an event. Consider the following example: A regular deck of cards has 52 cards, so it can be arranged in exactly 52! different ways. The probability of any given arrangement is thus about 1 in \(8 \cdot 10^{67}\). This is a very small probability. (If one were to add six cards to the deck, then the number of possible orderings would exceed the number of atoms in the known, observable universe.) However, every time we shuffle a deck of cards, we know that exactly one of the possible outcomes will materialize, so why should we ignore all such very improbable outcomes?
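The arithmetic behind the card-shuffling example is easily checked (a quick sketch; only the figures quoted above are computed):

```python
from math import factorial

orderings_52 = factorial(52)
print(orderings_52)          # about 8.07 * 10**67 possible orderings
print(1 / orderings_52)      # probability of any one particular ordering
print(factorial(58))         # with six extra cards: about 2.35 * 10**78 orderings
```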

Nicholas J. J. Smith (2014) defends a modern version of Buffon’s solution. He bases his argument on the following principle:

  • Rationally negligible probabilities (RNP): For any lottery featuring in any decision problem faced by any agent, there is an \(\epsilon > 0\) such that the agent need not consider outcomes of that lottery of probability less than \(\epsilon\) in coming to a fully rational decision. (Smith 2014: 472)

Smith points out that the order of the quantifiers in RNP is crucial. The claim is that for every lottery there exists some probability threshold \(\epsilon\) below which all probabilities should be ignored, but it would be a mistake to think that one and the same \(\epsilon\) is applicable to every lottery. This is important because otherwise we could argue that RNP allows us to combine thousands or millions of separate events with a probability of less than \(\epsilon.\) It would obviously make little sense to ignore, say, half a million one-in-a-million events. By keeping in mind that the appropriate \(\epsilon\) may vary from case to case, this worry can be dismissed.

Smith also points out that if we ignore probabilities less than \(\epsilon,\) then we have to increase some other probabilities to ensure that all probabilities sum up to one, as required by the probability axioms (see section 1 in the entry on interpretations of probability ). Smith proposes a principle for doing this in a systematic manner.
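Smith’s own renormalization rule is not reproduced here, but the general idea can be illustrated with a deliberately simple stand-in (an assumption of this sketch, not Smith’s proposal): drop every outcome whose probability falls below the threshold and rescale the remaining probabilities so that they again sum to one.

```python
def truncated_expectation(outcomes, epsilon):
    """outcomes: list of (probability, utility) pairs.
    Ignore outcomes with probability below epsilon, rescale the rest so the
    probabilities sum to one again, and return the resulting expectation."""
    kept = [(p, u) for p, u in outcomes if p >= epsilon]
    total_p = sum(p for p, _ in kept)
    return sum((p / total_p) * u for p, u in kept)

# St. Petersburg payoff schedule, listed up to 60 flips for illustration.
st_petersburg = [(0.5 ** n, 2 ** n) for n in range(1, 61)]
for eps in (1e-3, 1e-6, 1e-9):
    # The expectation is finite for every fixed threshold, though it grows
    # as the threshold shrinks.
    print(eps, truncated_expectation(st_petersburg, eps))
```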

However, why should we accept RNP? What is the argument for accepting this controversial principle apart from the fact that it would solve the St. Petersburg paradox? Smith’s argument goes as follows:

Infinite precision cannot be required: rather, in any given context, there must be some finite tolerance—some positive threshold such that ignoring all outcomes whose probabilities lie below this threshold counts as satisfying the norm…. There is a norm of decision theory which says to ignore outcomes whose probability is zero. Because this norm mentions a specific probability value (zero), it is the kind of norm where it makes sense to impose a tolerance: zero plus or minus \(\epsilon\) (which becomes zero plus \(\epsilon,\) given that probabilities are all between 0 and 1)… the idea behind (RNP) is that in any actual context in which a decision is to be made, one never needs to be infinitely precise in this way—that it never matters. There is (for each decision problem, each lottery therein, and each agent) some threshold such that the agent would not be irrational if she simply ignored outcomes whose probabilities lie below that threshold. (Smith 2014: 472–474)

Suppose we accept the claim that infinite precision is not required in decision theory. This would entail, per Smith’s argument, that it is rationally permissible to ignore probabilities smaller than \(\epsilon\). However, to ensure that the decision maker never pays a fortune for playing the St. Petersburg game it seems that Smith would have to defend the stronger claim that decision makers are rationally required to ignore small probabilities, i.e., that it is not permissible to not ignore them. Decision makers who find themselves in agreement with Smith’s view run a risk of paying a very large amount for playing the St. Petersburg game without doing anything deemed to be irrational by RNP. This point is important because it is arguably more difficult to show that decision makers are rationally required to avoid “infinite precision” in decisions in which this is an attainable and fully realistic goal, such as the St. Petersburg game. For a critique of RNP and a discussion of some related issues, see Hájek (2014).

Another objection to RNP has been proposed by Yoaav Isaacs (2016). He shows that RNP together with an additional principle endorsed by Smith (Weak Consistency) entail that the decision maker will sometimes take arbitrarily much risk for arbitrarily little reward.

Lara Buchak (2013) proposes what is arguably a more elegant version of this solution. Her suggestion is that we should assign exponentially less weight to small probabilities as we calculate an option’s value. A possible weighting function r discussed by Buchak is \(r(p) = p^2.\) Her proposal is, thus, that if the probability is \(\frac{1}{8}\) that you win $8 in addition to what you already have, and your utility of money increases linearly, then instead of multiplying your gain in utility by \(\frac{1}{8},\) you should multiply it by \((\frac{1}{8})^2 =\frac{1}{64}.\) Moreover, if the probability is \(\frac{1}{16}\) that you win $16 in addition to what you already have, you should multiply your gain by \(\frac{1}{256},\) and so on. This means that small probabilities contribute very little to the risk-weighted expected utility.

Buchak’s proposal vaguely resembles the familiar idea that our marginal utility of money is decreasing. As stressed by Cramér and Daniel Bernoulli, more money is always better than less, but the utility gained from each extra dollar is decreasing. According to Buchak, the weight we should assign to an outcome’s probability is also nonlinear: Small probabilities matter less the smaller they are, and their relative importance decreases exponentially:

The intuition behind the diminishing marginal utility analysis of risk aversion was that adding money to an outcome is of less value the more money the outcome already contains. The intuition behind the present analysis of risk aversion is that adding probability to an outcome is of more value the more likely that outcome already is to obtain. (Buchak 2014: 1099.)

Buchak notes that this move does not by itself solve the St. Petersburg paradox. For reasons that are similar to those Menger (1934 [1979]) mentions in his comment on Bernoulli’s solution, the paradox can be reintroduced by adjusting the outcomes such that the sum increases linearly (for details, see Buchak 2013: 73–74). Buchak is, for this reason, also committed to RNP, i.e., the controversial assumption that there will be some probability so small that it does not make any difference to the overall value of the gamble.
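Using the simplified per-outcome weighting described above with \(r(p)=p^2\) (only a sketch of the point; Buchak’s full theory weights probabilities in a more sophisticated, rank-dependent way, and the function names here are mine), one can check that the original payoff schedule yields a finite sum while a suitably inflated schedule restores divergence:

```python
def risk_weighted_sum(payoff, n_terms=60, r=lambda p: p ** 2):
    """Sum r(p_n) * payoff(n) over the first n_terms outcomes, where
    p_n = (1/2)**n is the probability that the first heads is on flip n."""
    return sum(r(0.5 ** n) * payoff(n) for n in range(1, n_terms + 1))

print(risk_weighted_sum(lambda n: 2 ** n))   # converges towards 1
print(risk_weighted_sum(lambda n: 4 ** n))   # each term equals 1, so the
                                             # partial sums grow without bound
```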

Another worry is that because Buchak rejects the principle of maximizing expected utility and replaces it with the principle of maximizing risk-weighted expected utility, many of the stock objections decision theorists have raised against violations of the expected utility principle can be raised against her principle as well. For instance, if you accept the principle of maximizing risk-weighted expected utility, you have to reject the independence axiom. This entails that you can be exploited in some cleverly designed pragmatic argument. See Briggs (2015) for a discussion of some objections to Buchak’s theory.

In the Petrograd game introduced by Colyvan (2008) the player wins $1 more than in the St. Petersburg game regardless of how many times the coin is flipped. So instead of winning 2 utility units if the coin lands heads on the first toss, the player wins 3; and so on. See Table 1.

Table 1

Probability:      \(\frac{1}{2}\)   \(\frac{1}{4}\)   \(\frac{1}{8}\)
St. Petersburg:   2                 4                 8
Petrograd:        \(2+1\)           \(4+1\)           \(8+1\)

It seems obvious that the Petrograd game is worth more than the St. Petersburg game. However, it is not easy to explain why. Both games have infinite expected utility, so the expected utility principle gives the wrong answer. It is not true that the Petrograd game is worth more than the St. Petersburg game because its expected utility is higher; the two games have exactly the same expected utility. This shows that the expected utility principle is not universally applicable to all risky choices, which is an interesting observation in its own right.

Is the Petrograd game worth more than the St. Petersburg game because the outcomes of the Petrograd game dominate those of the St. Petersburg game? In this context, dominance means that the player will always win $1 more regardless of which state of the world turns out to be the true state, that is, regardless of how many times the coin is flipped. The problem is that it is easy to imagine versions of the Petrograd game to which the dominance principle would not be applicable. Imagine, for instance, a version of the Petrograd game that is exactly like the one in Table 1 except that for some very improbable outcome (say, if the coin lands heads for the first time on the 100th flip) the player wins 1 unit less than in the St. Petersburg game. This game, the Petrogradskij game, does not dominate the St. Petersburg game. However, since it is almost certain that the player will be better off by playing the Petrogradskij game, a plausible decision theory should be able to explain why the Petrogradskij game is worth more than the St. Petersburg game.

Colyvan claims that we can solve this puzzle by introducing a new version of expected utility theory called Relative Expected Utility Theory (REUT). According to REUT we should calculate the difference in expected utility between the two options for each possible outcome. Formally, the relative expected utility (\(\reu\)) of act \(A_k\) over \(A_l\) is

\[\reu(A_k, A_l) = \sum_i p_i \left(u_{ki} - u_{li}\right),\]

where \(p_i\) is the probability of state i and \(u_{ki}\) and \(u_{li}\) are the utilities of \(A_k\) and \(A_l\) in that state.
According to Colyvan, it is rational to choose \(A_k\) over \(A_l\) if and only if \(\reu(A_k,A_l) \gt 0\).

Colyvan’s REUT neatly explains why the Petrograd game is worth more than the St. Petersburg game because the relative expected utility is 1. REUT also explains why the Petrogradskij game is worth more than the St. Petersburg game: the difference in expected utility is \(1 - (\frac{1}{2})^{100}\) which is > 0.
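A quick sketch of the calculation for truncated versions of the games (the truncation to 200 states is only to keep the sum finite in code; state n is “the first heads occurs on flip n”, and the function names are mine):

```python
def reu(payoff_k, payoff_l, n_terms=200):
    """Relative expected utility of act k over act l: the probability-weighted
    sum of the state-by-state utility differences."""
    return sum(0.5 ** n * (payoff_k(n) - payoff_l(n)) for n in range(1, n_terms + 1))

st_petersburg = lambda n: 2 ** n
petrograd = lambda n: 2 ** n + 1

print(reu(petrograd, st_petersburg))   # approaches 1, so Petrograd is preferred
```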

However, Peterson (2013) notes that REUT cannot explain why the Leningradskij game is worth more than the Leningrad game (see Table 2). The Leningradskij game is the version of the Petrograd game in which the player, in addition to receiving a finite number of units of utility, also gets to play the St. Petersburg game (SP) if the coin lands heads up in the second round. In the Leningrad game the player gets to play the St. Petersburg game (SP) if the coin lands heads up in the third round.

Table 2

Probability:      \(\frac{1}{2}\)   \(\frac{1}{4}\)        \(\frac{1}{8}\)        \(\frac{1}{16}\)
Leningrad:        2                 4                      \(8+\textrm{SP}\)      16
Leningradskij:    2                 \(4+\textrm{SP}\)      8                      16

It is obvious that the Leningradskij game is worth more than the Leningrad game because the probability that the player gets to play SP as a bonus (which has infinite expected utility) is higher. However, REUT cannot explain why. The difference in expected utility for the state that occurs with probability \(\frac{1}{4}\) in Table 2 is \(-\infty\) and it is \(+\infty\) for the state that occurs with probability \(\frac{1}{8}.\) Therefore, because \(p \cdot \infty = \infty\) for all positive probabilities \(p\), and “\(\infty - \infty\)” is undefined in standard analysis, REUT cannot be applied to these games.

Bartha (2007, 2016) proposes a more complex version of relative expected utility theory. In Bartha’s theory, the utility of an outcome x is compared to the utility of some alternative outcome y and a basepoint z, which can be chosen arbitrarily as long as x and y are at least as preferred as z. The relative utility of x vis-à-vis y and the base-point z is then defined as the ratio between u(x) − u(z) and u(y) − u(z); the latter is a “measuring stick” to which u(x) − u(z) is compared. So if u(x) = 10, u(y) = 20 and u(z) = 0, then the relative utility of x vis-à-vis y and the base-point z is U(x, y; z) = 0.5.

Bartha’s suggestion is to ask the agent to compare the St. Petersburg game to a lottery between two other games. If, for instance, Petrograd + is the game in which the player always wins 2 units more than in the St. Petersburg game regardless of how many times the coin is tossed, then the player could compare the Petrograd game to a lottery between Petrograd + and the St. Petersburg game. By determining for what probabilities p a lottery in which one plays Petrograd + with probability p and the St. Petersburg game with probability \(1-p\) is better than playing the Petrograd game for sure one can establish a measure of the relative value of Petrograd as compared to Petrograd + or St. Petersburg. (For details, see Sect. 5 in Bartha 2016. See also Colyvan and Hájek’s 2016 discussion of Bartha’s theory.)

An odd feature of Bartha’s theory is that two lotteries can have the same relative utility even if one is strictly preferred to the other; see Bartha (2011: 34–35). This indicates that the relative utilities assigned to lotteries in Bartha’s theory are not always choice-guiding.

Let us also mention another, quite simple variation of the original St. Petersburg game, which is played as follows (see Peterson 2015: 87): A manipulated coin lands heads up with probability 0.4 and the player wins a prize worth \(2^n\) units of utility, where n is the number of times the coin was tossed. This game, the Moscow game, is more likely to yield a long sequence of flips and is therefore worth more than the St. Petersburg game, but the expected utility of both games is the same, because both games have infinite expected utility. It might be tempting to say that the Moscow game is more attractive because the Moscow game stochastically dominates the St. Petersburg game. (That one game stochastically dominates another means that for every utility level u, the first game is at least as likely as the second to yield a prize worth at least u units, and for some u it is strictly more likely to do so.) However, the stochastic dominance principle is inapplicable to games in which there is a small risk that the player wins a prize worth slightly less than in the other game. We can, for instance, imagine that if the coin lands heads on the 100th flip the Moscow game pays one unit less than the St. Petersburg game; in this scenario neither game stochastically dominates the other. Despite this, it still seems reasonable to insist that the game that is almost certain to yield a better outcome (in the sense explained above) is worth more. The challenge is to explain why in a robust and non-arbitrary way.
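The dominance claim for the original Moscow game can be spelled out with a short calculation: the chance that a game lasts at least k flips, and hence pays at least \(2^k\), is \((1-p)^{k-1}\), where p is the probability of heads, and \(0.6^{k-1} > 0.5^{k-1}\) for every \(k > 1\). A sketch:

```python
def prob_prize_at_least(k, p_heads):
    """Probability that the first heads occurs on flip k or later,
    i.e. that the prize is worth at least 2**k units."""
    return (1 - p_heads) ** (k - 1)

for k in (1, 2, 5, 10, 20):
    # Moscow game (heads with probability 0.4) vs. St. Petersburg (0.5):
    # the former is at least as likely to pay 2**k or more for every k,
    # and strictly more likely for k > 1.
    print(k, prob_prize_at_least(k, 0.4), prob_prize_at_least(k, 0.5))
```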

The Pasadena paradox introduced by Nover and Hájek (2004) is inspired by the St. Petersburg game, but the pay-off schedule is different. As usual, a fair coin is flipped until it comes up heads for the first time; let n be the number of flips. If n is odd the player wins \((2^n)/n\) units of utility; however, if n is even the player has to pay \((2^n)/n\) units. How much should one be willing to pay for playing this game?

If we sum up the terms in the temporal order in which the outcomes occur and calculate expected utility in the usual manner we find that the Pasadena game is worth:

\[\begin{align} \frac{1}{2}\cdot\frac{2}{1} - \frac{1}{4}\cdot\frac{4}{2} + \frac{1}{8}\cdot\frac{8}{3} &- \frac{1}{16}\cdot\frac{16}{4} + \frac{1}{32}\cdot\frac{32}{5} - \cdots \\ &= 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \cdots \\ &= \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n} \end{align}\]

This infinite sum converges to ln 2 (about 0.69 units of utility). However, Nover and Hájek point out that we would obtain a very different result if we were to rearrange the order in which the very same numbers are summed up. Here is one of many possible examples of this mathematical fact: if we take the positive terms two at a time followed by a single negative term, the rearranged series \(1 + \frac{1}{3} - \frac{1}{2} + \frac{1}{5} + \frac{1}{7} - \frac{1}{4} + \cdots\) converges to \(\frac{3}{2}\ln 2\) rather than \(\ln 2\).

This is, of course, not news to mathematicians. The infinite sum produced by the Pasadena game is known as the alternating harmonic series, which is a conditionally convergent series. (A series with terms \(a_n\) is conditionally convergent if \(\sum_{n=1}^{\infty} a_n\) converges but \(\sum_{n=1}^{\infty} \lvert a_n\rvert\) diverges.) Because of a theorem known as the Riemann rearrangement theorem, we know that if an infinite series is conditionally convergent, then its terms can always be rearranged such that the sum converges to any finite number, or to \(+\infty\) or to \(-\infty\).
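The order-dependence is easy to exhibit numerically. The sketch below first sums the terms \((-1)^{n-1}/n\) in their temporal order, approaching \(\ln 2\), and then greedily rearranges the very same terms to approach a chosen target value, in the spirit of the Riemann rearrangement theorem (the targets 2.0 and −1.0 are arbitrary illustrative choices):

```python
from math import log

def term(n):
    """The n-th expected-utility contribution of the Pasadena game."""
    return (-1) ** (n - 1) / n

# 1. Summing in temporal order approaches ln 2.
print(sum(term(n) for n in range(1, 200001)), log(2))

# 2. Greedy rearrangement: add unused positive terms while the running total
#    is at or below the target, otherwise add the next unused negative term.
def rearranged_partial_sum(target, steps=100000):
    total, pos, neg = 0.0, 1, 2     # next unused positive / negative denominator
    for _ in range(steps):
        if total <= target:
            total += 1 / pos        # positive terms: 1, 1/3, 1/5, ...
            pos += 2
        else:
            total -= 1 / neg        # negative terms: -1/2, -1/4, ...
            neg += 2
    return total

print(rearranged_partial_sum(2.0))    # close to 2.0
print(rearranged_partial_sum(-1.0))   # close to -1.0
```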

Nover and Hájek’s point is that it seems arbitrary to sum up the terms in the Pasadena game in the temporal order produced by the coin flips. To see why, it is helpful to imagine a slightly modified version of the game. In their original paper, Nover and Hájek ask us to imagine that:

We toss a fair coin until it lands heads for the first time. We have written on consecutive cards your pay-off for each possible outcome. The cards read as follows: (Top card) If the first heads is on toss #1, we pay you $2. […] By accident, we drop the cards, and after picking them up and stacking them on the table, we find that they have been rearranged. No matter, you say—obviously the game has not changed, since the pay-off schedule remains the same. The game, after all, is correctly and completely specified by the conditionals written on the cards, and we have merely changed the order in which the conditions are presented. (Nover and Hájek 2004: 237–239)

Under the circumstances described here, we seem to have no reason to prefer any particular order in which to sum up the terms of the infinite series. So is the expected value of the Pasadena game \(\ln 2\) or \(\frac{1}{2}(\ln 2)\) or \(\frac{1}{3}\) or \(-\infty\) or 345.68? All these suggestions seem equally arbitrary. Moreover, the same holds true for the Altadena game, in which every payoff is increased by one dollar. The Altadena game is clearly better than the Pasadena game, but advocates of expected utility theory seem unable to explain why.

The literature on the Pasadena game is extensive. See, e.g., Hájek and Nover (2006), Fine (2008), Smith (2014), and Bartha (2016). A particularly influential solution is due to Easwaran (2008). He introduces a distinction between a strong and a weak version of the expected utility principle, inspired by the well-known distinction between the strong and weak versions of the law of large numbers. According to the strong law of large numbers, the average utility of a game converges to its expected utility with probability one as the number of iterations goes to infinity. The weak law of large numbers holds that for a sufficiently large set of trials, the probability that the average utility differs from the expected utility by more than some small pre-specified amount can be made arbitrarily small. So according to the weak expected utility principle,

by fixing in advance a high enough number of n plays, the average payoff per play can be almost guaranteed to be arbitrarily close to ln 2,

while the strong version of the principle entails that

if one player keeps getting to decide whether to play again or quit, then she can almost certainly guarantee as much profit as she wants, regardless of the (constant) price per play. (Easwaran 2008: 635)

Easwaran’s view is that the weak expected utility principle should guide the agent’s choice and that the fair price to pay is ln 2.
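A simulation sketch of the weak-expectation idea (illustrative only; the payoff is the \(\pm(2^n)/n\) schedule defined above): for a large, pre-specified number of plays the average payoff per play usually lands in the neighbourhood of \(\ln 2\), although occasional long runs of tails can still throw it far off.

```python
import random
from math import log

def pasadena_payoff():
    """Flip a fair coin until heads; with n flips, pay out (2**n)/n if n is odd
    and -(2**n)/n if n is even."""
    n = 1
    while random.random() < 0.5:
        n += 1
    value = (2 ** n) / n
    return value if n % 2 == 1 else -value

trials = 10 ** 6
average = sum(pasadena_payoff() for _ in range(trials)) / trials
print(average, log(2))   # the weak expectation is ln 2, roughly 0.693;
                         # individual runs can still stray from it
```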

However, Easwaran’s solution cannot be generalized to other games with slightly different payoff schemes. Bartha (2016: 805) describes a version of the Pasadena game that has no expected value. In this game, the Arroyo game, the player wins \((-1)^{n+1}(n+1)\) units with probability \(p_n = \frac{1}{n(n+1)}\). If we calculate the expected utility in the order in which the outcomes are produced, we get the same result as for the Pasadena game: \(1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} \cdots\) For reasons explained (and proved) by Bartha, the Arroyo game has no weak expected utility.

It is also worth keeping in mind that Pasadena-like scenarios can arise in non-probabilistic contexts (see Peterson 2013). Imagine, for instance, an infinite population in which the utility of individual number j is \(\frac{(-1)^{j-1}}{j}\). What is the total utility of this population? Or imagine that you are the proud owner of a Jackson Pollock painting. An art dealer tells you the overall aesthetic value of the painting is the sum of the aesthetic values of its parts. You number the points in the painting with arbitrary numbers 1, 2, 3, … (perhaps by writing down the numbers on cards and then dropping all cards on the floor); the aesthetic value of each point j is \(\frac{(-1)^{j-1}}{j}\). What is the total aesthetic value of the painting? These examples are non-probabilistic versions of the Pasadena problem, to which the expected utility principle is inapplicable. There is no uncertainty about any state of nature; the decision maker knows for sure what the world is like. This means that Easwaran’s distinction between weak and strong expectations is not applicable.

Although some of these problems may appear to be somewhat esoteric, we cannot dismiss them. All Pasadena-like problems are vulnerable to the same contagion problem as the St Petersburg game (see section 2 ). Hájek and Smithson offer the following colorful illustration:

You can choose between pizza and Chinese for dinner. Each option’s desirability depends on how you weigh probabilistically various scenarios (burnt pizza, perfectly cooked pizza,… over-spiced Chinese, perfectly spiced Chinese…) and the utilities you accord them. Let us stipulate that neither choice dominates the other, yet it should be utterly straightforward for you to make a choice. But it is not if the expectations of pizza and Chinese are contaminated by even a miniscule [sic] assignment of credence to the Pasadena game. If the door is opened to it just a crack, it kicks the door down and swamps all expected utility calculations. You cannot even choose between pizza and Chinese. (Hájek and Smithson 2012: 42, emph. added.)

Colyvan (2006) suggests that we should bite the bullet on the Pasadena game and accept that it has no expected utility. The contagion problem shows that if we were to do so, we would have to admit that the principle of maximizing expected utility would be applicable to nearly no decisions. Moreover, because the contagion problem is equally applicable to all games discussed in this entry (St. Petersburg, Pasadena, Arroyo, etc.) it seems that all these problems may require a unified solution.

For hundreds of years, decision theorists have agreed that rational agents should maximize expected utility. The discussion has mostly been focused on how to interpret this principle, especially for choices in which the causal structure of the world is unusual. However, until recently no one has seriously questioned that the principle of maximizing expected utility is the right principle to apply. The rich and growing literature on the many puzzles inspired by the St. Petersburg paradox indicates that this might have been a mistake. Perhaps the principle of maximizing expected utility should be replaced by some entirely different principle?

  • Alexander, J. M., 2011, “Expectations and Choiceworthiness”, Mind , 120(479): 803–817. doi:10.1093/mind/fzr049
  • Arrow, Kenneth J., 1970, Essays in the Theory of Risk-Bearing , Amsterdam: North-Holland.
  • Aumann, Robert J., 1977, “The St. Petersburg Paradox: A Discussion of Some Recent Comments”, Journal of Economic Theory , 14(2): 443–445. doi:10.1016/0022-0531(77)90143-0
  • Bartha, Paul, 2007, “Taking Stock of Infinite Value: Pascal’s Wager and Relative Utilities”, Synthese , 154(1): 5–52.
  • Bartha, Paul, Barker, John and Hájek, Alan, 2014, “Satan, Saint Peter and Saint Petersburg: Decision Theory and Discontinuity at Infinity”, Synthese , 191(4): 629–660.
  • Bartha, Paul F. A., 2016, “Making Do Without Expectations”, Mind , 125(499): 799–827. doi:10.1093/mind/fzv152
  • Bassett, Gilbert W., 1987, “The St. Petersburg Paradox and Bounded Utility”, History of Political Economy , 19(4): 517–523. doi:10.1215/00182702-19-4-517
  • Bernoulli, Daniel, 1738 [1954], “Specimen Theoriae Novae de Mensura Sortis”, Commentarii Academiae Scientiarum Imperialis Petropolitanae , 5: 175–192. English translation, 1954, “Exposition of a New Theory on the Measurement of Risk”, Econometrica , 22(1): 23–36. doi:10.2307/1909829
  • Bernoulli, Jakob, 1975, Die Werke von Jakob Bernoulli , Band III, Basel: Birkhäuser. A translation from this by Richard J. Pulskamp of Nicolas Bernoulli’s letters concerning the St. Petersburg Game is available online .
  • Briggs, Rachael, 2015, “Costs of Abandoning the Sure-Thing Principle”, Canadian Journal of Philosophy , 45(5–6): 827–840. doi:10.1080/00455091.2015.1122387
  • Brito, D. L., 1975, “Becker’s Theory of the Allocation of Time and the St. Petersburg Paradox”, Journal of Economic Theory , 10(1): 123–126. doi:10.1016/0022-0531(75)90067-8
  • Buchak, Lara, 2013, Risk and Rationality , New York: Oxford University Press. doi:10.1093/acprof:oso/9780199672165.001.0001
  • –––, 2014, “Risk and Tradeoffs”, Erkenntnis , 79(S6): 1091–1117. doi:10.1007/s10670-013-9542-4
  • Buffon, G. L. L., 1777, “Essai d’Arithmétique Morale”, in Suppléments à l’Histoire Naturelle . Reprinted in Oeuvres Philosophiques de Buffon , Paris, 1954.
  • Chalmers, David J., 2002, “The St. Petersburg Two-Envelope Paradox”, Analysis , 62(2): 155–157. doi:10.1093/analys/62.2.155
  • Chen, Eddy Keming and Daniel Rubio, forthcoming, “Surreal Decisions”, Philosophy and Phenomenological Research , First online: 5 June 2018. doi:10.1111/phpr.12510
  • Colyvan, Mark, 2006, “No Expectations”, Mind , 115(459): 695–702. doi:10.1093/mind/fzl695
  • –––, 2008, “Relative Expectation Theory”, Journal of Philosophy , 105(1): 37–44. doi:10.5840/jphil200810519
  • Colyvan, Mark and Alan Hájek, 2016, “Making Ado Without Expectations”, Mind , 125(499): 829–857. doi:10.1093/mind/fzv160
  • Cowen, Tyler and Jack High, 1988, “Time, Bounded Utility, and the St. Petersburg Paradox”, Theory and Decision , 25(3): 219–223. doi:10.1007/BF00133163
  • Dutka, Jacques, 1988, “On the St. Petersburg Paradox”, Archive for History of Exact Sciences , 39(1): 13–39. doi:10.1007/BF00329984
  • Easwaran, Kenny, 2008, “Strong and Weak Expectations”, Mind , 117(467): 633–641. doi:10.1093/mind/fzn053
  • Fine, Terrence L., 2008, “Evaluating the Pasadena, Altadena, and St Petersburg Gambles”, Mind , 117(467): 613–632. doi:10.1093/mind/fzn037
  • Hájek, Alan, 2014, “Unexpected Expectations”, Mind , 123(490): 533–567. doi:10.1093/mind/fzu076
  • Hájek, Alan and Harris Nover, 2006, “Perplexing Expectations”, Mind , 115(459): 703–720. doi:10.1093/mind/fzl703
  • –––, 2008, “Complex Expectations”, Mind , 117(467): 643–664. doi:10.1093/mind/fzn086
  • Hájek, Alan and Michael Smithson, 2012, “Rationality and Indeterminate Probabilities”, Synthese , 187(1): 33–48. doi:10.1007/s11229-011-0033-3
  • Isaacs, Yoaav, 2016, “Probabilities Cannot Be Rationally Neglected”, Mind , 125(499): 759–762. doi:10.1093/mind/fzv151
  • Jeffrey, Richard C., 1983, The Logic of Decision , 2nd edition, Chicago: University of Chicago Press.
  • Jordan, Jeff, 1994, “The St. Petersburg Paradox and Pascal’s Wager”, Philosophia , 23(1–4): 207–222. doi:10.1007/BF02379856
  • Joyce, James M., 1999, The Foundations of Causal Decision Theory , Cambridge: Cambridge University Press.
  • Lauwers, Luc and Peter Vallentyne, 2016, “Decision Theory without Finite Standard Expected Value”, Economics and Philosophy , 32(3): 383–407. doi:10.1017/S0266267115000334
  • Linnebo, Øystein and Stewart Shapiro, 2019, “Actual and Potential Infinity”, Noûs , 53(1): 160–191. doi:10.1111/nous.12208
  • Luce, R. Duncan, 1959, “On the Possible Psychophysical Laws”, Psychological Review , 66(2): 81–95. doi:10.1037/h0043178
  • McClennen, Edward F., 1994, “Pascal’s Wager and Finite Decision Theory”, in Gambling on God: Essays on Pascal’s Wager , Jeff Jordan (ed.), Boston: Rowman & Littlefield, 115–138.
  • McCutcheon, Randall G., 2021, “How to co-exist with nonexistent expectations”, Synthese , 198(3): 2783–2799.
  • Menger, Karl, 1934 [1979], “Das Unsicherheitsmoment in der Wertlehre: Betrachtungen im Anschluß an das sogenannte Petersburger Spiel”, Zeitschrift für Nationalökonomie , 5(4): 459–485. Translated, 1979, as “The Role of Uncertainty in Economics”, in Menger’s Selected Papers in Logic and Foundations, Didactics, Economics , Dordrecht: Springer Netherlands, 259–278. doi:10.1007/BF01311578 (de) doi:10.1007/978-94-009-9347-1_25 (en)
  • Nover, Harris and Alan Hájek, 2004, “Vexing Expectations”, Mind , 113(450): 237–249. doi:10.1093/mind/113.450.237
  • Peterson, Martin, 2011, “A New Twist to the St. Petersburg Paradox”, Journal of Philosophy , 108(12): 697–699. doi:10.5840/jphil20111081239
  • –––, 2013, “A Generalization of the Pasadena Puzzle”, Dialectica , 67(4): 597–603. doi:10.1111/1746-8361.12046
  • –––, 2009 [2017], An Introduction to Decision Theory , Cambridge: Cambridge University Press; second edition 2017. doi:10.1017/CBO9780511800917 doi:10.1017/9781316585061
  • –––, 2019, “Interval Values and Rational Choice”, Economics and Philosophy , 35(1): 159–166. doi:10.1017/S0266267118000147
  • Ramsey, Frank Plumpton, 1926 [1931], “Truth and Probability”, printed in The Foundations of Mathematics and Other Logical Essays , R. B. Braithwaite (ed.), London: Kegan Paul, Trench, Trubner & Co., 156–198. Reprinted in Philosophy of Probability: Contemporary Readings , Antony Eagle (ed.), New York: Routledge, 2011: 52–94. [ Ramsey 1926 [1931] available online ]
  • Russell, Jeffrey Sanford and Isaacs, Yoaav, 2021, “Infinite Prospects”, Philosophy and Phenomenological Research , 103(1): 178–198.
  • Samuelson, Paul A., 1977, “St. Petersburg Paradoxes: Defanged, Dissected, and Historically Described”, Journal of Economic Literature , 15(1): 24–55.
  • Savage, Leonard J., 1954, The Foundations of Statistics , (Wiley Publications in Statistics), New York: Wiley. Second edition, Courier Corporation, 1974.
  • Skala, Heinz J., 1975, Non-Archimedean Utility Theory , Dordrecht: D. Reidel.
  • Smith, Nicholas J. J., 2014, “Is Evaluative Compositionality a Requirement of Rationality?”, Mind , 123(490): 457–502. doi:10.1093/mind/fzu072
  • von Neumann, John and Oskar Morgenstern, 1947, Theory of Games and Economic Behavior , second revised edition, Princeton, NJ: Princeton University Press.
  • Weirich, Paul, 1984, “The St. Petersburg Gamble and Risk”, Theory and Decision , 17(2): 193–202. doi:10.1007/BF00160983
  • Williamson, Timothy, 2007, “How Probable Is an Infinite Sequence of Heads?”, Analysis , 67(295): 173–180. doi:10.1111/j.1467-8284.2007.00671.x
