Focal Length of a Concave Mirror and a Convex Lens using U-V Method

1 Introduction

  • For an appropriate object distance u , find the image distance v . Measure u and v .

1.1 Concave Mirror

[Figure: u–v graph for a concave mirror]

  • The graph is a hyperbola with asymptotes at u = f and v = f, i.e., for an object placed at F the image is formed at infinity, and for an object at infinity the image is formed at F.
  • The values of u and v are equal at the point C, which corresponds to u = v = 2f. This point is the intersection of the u–v curve and the straight line v = u, and it represents the centre of curvature of the mirror (a short derivation follows this list).
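The location of C is a one-line consequence of the mirror formula (using magnitudes of u and v, as in the graph):

\[ \frac{1}{u} + \frac{1}{v} = \frac{1}{f}, \qquad u = v \;\Rightarrow\; \frac{2}{u} = \frac{1}{f} \;\Rightarrow\; u = v = 2f. \]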

1.2 Convex Lens

  • The graph is a hyperbola with asymptotes at u = -f and v = f, i.e., for an object placed at F the image is formed at infinity, and for an object at infinity the image is formed at F.
  • At the point C, the values of u and v are equal in magnitude but opposite in sign, i.e., v = -u = 2f. This point is the intersection of the u–v curve and the straight line v = -u. If an object is placed at a distance 2f from the optical centre of the lens, then its image is formed at a distance 2f on the other side (see the derivation below).
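Similarly, with the Cartesian sign convention for the lens (u negative for a real object, v positive for a real image), substituting v = -u into the lens formula locates C:

\[ \frac{1}{v} - \frac{1}{u} = \frac{1}{f}, \qquad v = -u \;\Rightarrow\; \frac{2}{v} = \frac{1}{f} \;\Rightarrow\; v = -u = 2f. \]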

2 IIT JEE Solved Problems

  • x < f
  • f < x < 2 f
  • x > 2 f
  • 0.5 ± 0.1
  • 0.5 ± 0.05
  • a concave mirror of suitable focal length.
  • a convex mirror of suitable focal length.
  • a convex lens of focal length less than 0.25 m.
  • a concave lens of suitable focal length.
  • Convex lens
  • Concave lens
  • Convex mirror
  • Concave mirror
  • half of the image will disappear.
  • complete image will be formed.
  • intensity of the image will increase.
  • intensity of the image will decrease.

3 Experiment Details

3.1 Procedure

  • Fix the given concave mirror on its stand. Arrange the screen on the table so that the image of a distant object is obtained on it. Measure the distance between the mirror and the screen using a metre scale. This distance is the approximate focal length (f) of the mirror.
  • Set the values of u ranging from 1.5f to 2.5f. Divide the range into a number of equal steps.
  • Place the mirror in front of an illuminated object and fix it at the first object distance u (about 1.5f).
  • Place the screen on the table facing the mirror so that the reflected image lies on the screen. Keeping the distance between the object and the mirror fixed, adjust the position of the screen to get a clear image of the object. Remove the parallax to locate the image position accurately.
  • Measure the distance between the mirror and the object, and between the mirror and the screen. Take these values as u and v respectively. Calculate the focal length of the given concave mirror using the relation f = uv/(u + v).
  • Repeat the experiment for different values of u (up to 2.5f), measure v each time, and record it in the tabular column. Calculate the focal length (f) of the concave mirror for each pair of readings.
  • Calculate the mean of all the focal lengths to get the focal length of the given concave mirror.
  • The focal length of the mirror can also be determined graphically by plotting v against u, and 1/v against 1/u (a short computational sketch of both calculations follows this list).
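A minimal Python sketch of the last two steps above: averaging f = uv/(u + v) over the readings, and extracting 1/f as the intercept of the 1/v-versus-1/u straight line. It is not part of the prescribed procedure, and the (u, v) readings below are hypothetical magnitudes in cm.

# Hypothetical (u, v) readings in cm for a concave mirror (magnitudes only)
measurements_cm = [(30.0, 60.5), (35.0, 46.8), (40.0, 40.2), (45.0, 36.1), (50.0, 33.5)]

# Mirror formula with magnitudes: f = uv / (u + v), averaged over all readings
focal_lengths = [u * v / (u + v) for u, v in measurements_cm]
print(f"mean f = {sum(focal_lengths) / len(focal_lengths):.1f} cm")

# Graphical method: 1/v = 1/f - 1/u, so a straight-line fit of 1/v against 1/u
# has slope -1 and intercept 1/f on the 1/v axis.
inv_u = [1.0 / u for u, _ in measurements_cm]
inv_v = [1.0 / v for _, v in measurements_cm]
n = len(inv_u)
mean_x, mean_y = sum(inv_u) / n, sum(inv_v) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(inv_u, inv_v)) \
        / sum((x - mean_x) ** 2 for x in inv_u)
intercept = mean_y - slope * mean_x   # this is 1/f
print(f"f from 1/v-intercept = {1.0 / intercept:.1f} cm")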

3.2 Precautions

  • The principal axis of the mirror should be horizontal and parallel to central line of the optical bench.
  • The object should be vertical.
  • Index correction for u and v should be applied.

3.3 Data Table

S.No. | u (cm) | v (cm) | 1/u (cm⁻¹) | 1/v (cm⁻¹) | f (cm)

4 Exercise Problems

  • d/(m₁ - m₂)
  • d/(m₁ + m₂)
  • dm₁/m₂
  • dm₂/m₁
  • md/(m + 1)²
  • md/(m + 1)
  • md/(m - 1)²
  • md/(m - 1)
  • ( x + y ) ∕ 2
  • 36 cm
  • 72 cm
  • 18 cm
  • 9 cm
  • 4 cm
  • 20 cm
  • 40 cm
  • 30 cm
  • 60 cm
  • x = +20 cm
  • x = -30 cm
  • x = -10 cm
  • x = 0 cm
  • d/2
  • d/3
  • d/4
  • virtual image is always larger in size
  • real image is always smaller in size
  • real image is always larger in size
  • real image may be larger or smaller in size
  • 40.5 cm
  • -40 cm
  • -45 cm
  • u = -10 cm, f = 20 cm
  • u = -20 cm, f = -30 cm
  • u = -45 cm, f = -10 cm
  • u = -60 cm, f = 30 cm
  • must be less than 10 cm
  • must be greater than 20 cm
  • must not be greater than 20 cm
  • must not be less than 10 cm
  • The mirror equation is one which connects u , v , and f .
  • A real, inverted image of the same size is obtained if the object is placed at the centre of curvature of a concave mirror.
  • The image formed in concave mirror is always real.
  • Concave mirrors have reflecting inner surface.

5 Do it Yourself

5.1 Focal Length of a Concave Mirror

  • First, find the approximate focal length (f) of the concave mirror. You can do this by focusing a distant object such as the Sun. Fix the mirror vertically on a V-stand. Draw a long straight line on a table and place the mirror stand on it. The pole of the mirror should be exactly above the line.
  • Light a candle and place it on one end of the line. The flame of the candle should be at the same height as the pole of the mirror.
  • Fix the screen on its stand. The screen should be vertical. Place the screen between the candle and mirror.
  • Analyse the nature of the image by moving the screen and/or mirror.
  • For three different values of u, find the corresponding value of v. Calculate f by substituting into the mirror formula (a worked example with signs follows this list).
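As a worked illustration (the numbers are made up, not measured): suppose the candle is 30 cm in front of the mirror and a sharp image forms on a screen 60 cm in front of the mirror. With the Cartesian sign convention,

\[ u = -30\ \text{cm},\quad v = -60\ \text{cm},\quad \frac{1}{f} = \frac{1}{v} + \frac{1}{u} = -\frac{1}{60} - \frac{1}{30} = -\frac{1}{20} \;\Rightarrow\; f = -20\ \text{cm}, \]

i.e. a focal length of magnitude 20 cm for the concave mirror.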

5.2 Focal Length of the Convex Lens

  • First, find the approximate focal length (f) of the convex lens. Then fix it vertically on a V-stand. Draw a long straight line on a table and place the lens stand in the middle of the line. The optical centre of the lens should be exactly above the line.
  • Light a candle and place it on one end of the line. The flame of the candle should be at the same height as the optical centre of the lens.
  • Fix the screen on its stand. The screen should be vertical. Place the screen on other side of the lens.
  • Analyse the nature of the image by moving the screen and/or lens.
  • For three different values of u, find the corresponding value of v. Calculate f by substituting into the lens formula (a worked example follows this list).
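As a worked illustration with made-up numbers: if the candle is 30 cm from the lens and a sharp image forms on a screen 60 cm on the other side, then

\[ u = -30\ \text{cm},\quad v = +60\ \text{cm},\quad \frac{1}{f} = \frac{1}{v} - \frac{1}{u} = \frac{1}{60} + \frac{1}{30} = \frac{1}{20} \;\Rightarrow\; f = +20\ \text{cm}. \]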

6 More…

6.1 Plane Mirror Method

  • virtual and at a distance of 16 cm from the mirror.
  • real and at a distance of 16 cm from the mirror.
  • virtual and at a distance of 20 cm from the mirror.
  • real and at a distance of 20 cm from the mirror.

6.2 Displacement Method or Two Position Method
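Only the name of the method appears here, so as a brief reminder of the standard result (not derived in this document): if the object and screen are kept a fixed distance D apart (with D > 4f) and the convex lens gives a sharp image at two lens positions separated by a distance x, then

\[ f = \frac{D^{2} - x^{2}}{4D}. \]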

6.3 The Minimum Distance Method
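Again only the heading survives here; the standard result behind the method is that a convex lens can form a real image on a screen only when the object–screen separation D satisfies

\[ D \geq 4f, \]

so measuring the minimum separation at which a sharp image is still obtained gives the focal length as one quarter of that distance.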



Determination of Focal Lengths of Concave Mirror and Convex Lens


Physics Experiment - An Introduction to Determination of Focal Lengths of Concave Mirror and Convex Lens

We all have used a magnifying glass at some point or other to enlarge the object we are viewing. Why is it that at a particular distance from the glass the object is magnified the most? We have also seen dish-TV antennas at home, and pictures of satellite dishes, where a curved surface has a receiver mounted at some distance from it. Likewise, a fire can be started with a magnifying glass by converging the Sun's rays onto a point on a piece of paper held at a fixed distance. This special distance is known as the focal length.

In this simple experiment, we are going to learn how to determine the focal length of a concave mirror and the focal length of a convex lens, with sign.


Aim

To determine the focal length of a concave mirror and a convex lens, with sign.

Apparatus Required

A concave mirror

A convex lens

A white cardboard

One mirror holder

One lens holder

One image holder

A measuring scale

The focal length (f) of a curved mirror or a lens is the distance from its pole or optical centre to the point where rays parallel to the principal axis meet after reflection or refraction. The focal length measures how strongly the device reflects (for mirrors) or refracts (for lenses) the light rays and, for a spherical mirror, is equal to half the radius of curvature.

In a concave mirror, the light rays reflected from the object form a real and inverted image on the same side as the object. In a convex lens, the image is formed by a similar mechanism, except that the light rays from the object are refracted through the lens, and the image is formed on the other side of the lens.
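In symbols (standard relations, stated here for reference rather than taken from the original text): for a spherical mirror of radius of curvature R,

\[ f = \frac{R}{2}, \qquad \frac{1}{f} = \frac{1}{v} + \frac{1}{u}\ \text{(mirror)}, \qquad \frac{1}{f} = \frac{1}{v} - \frac{1}{u}\ \text{(lens)}. \]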

[Image 1: Image formation by a concave mirror]

[Image 2: Image formation by a convex lens]

In this experiment, we are going to determine the focal lengths (f) of both the devices using the above concept by obtaining the real and inverted image of a far object on a screen.

Procedure

Clean the surfaces of the mirror and lens using a solution of vinegar and water in the ratio 1:4.

Note down the least count of the meter scale.

First clamp the concave mirror in the mirror holder on its stand and keep its reflecting surface towards one of the windows (see Image 3 for reference).

Clamp the white cardboard on the image holder and place it on the scale between the mirror and the window (see Image 3 for reference).

By adjusting the positions of the mirror and the cardboard on the scale, try to obtain the image of an object outside the window (such as a tree) on the cardboard. The image will be inverted (see Image 3 for reference).

Adjust both positions till you get the sharpest image.

Note down the distances of both the sliders using meter scale (see Image 3 for reference).

Repeat the step two more times. Note the observations of the distances in a table.

Now remove the mirror with its holder and clamp the convex lens on its holder (see Image 4 for reference).

Place the lens with its holder on the scale between the window and the cardboard (see Image 4 for reference).

Try to obtain the same image on the screen by varying both the holders. Note the positions corresponding to the sharpest images so obtained (see Image 4 for reference).

Repeat the procedure two more times. Note the readings in a separate table.

[Image 3: Experimental setup for the determination of the focal length of a concave mirror.]

[Image 4: Experimental setup for the determination of the focal length of a thin convex lens.]

We'll be tabulating the various positions of the device and the screen. The focal length is given by the distance between the device and the screen in each case.

Observation Table

(i) Concave mirror

Sr. No. | Position of mirror |M| (cm) | Position of screen |v| (cm) | Focal length f = |M - v| (cm)
1 | | |
2 | | |
3 | | |

(ii) Convex lens

Sr. No. | Position of lens |L| (cm) | Position of screen |v| (cm) | Focal length f = |v - L| (cm)
1 | | |
2 | | |
3 | | |

Average focal length of concave mirror = ............ cm

Average focal length of convex lens = ............ cm

Precautions

The rays of light should be directly incident from the far object on the optical devices without any obstacle in between.

All the holders and stands should be straight and parallel.

Optical devices and screens should be in the same straight line in each case.

The surfaces of devices should be properly cleaned.

Positions should be noted only after obtaining the sharpest image on the screen.

Lab Manual Questions

How will you distinguish between a convex and concave lens?

Ans: A convex lens is thicker at the middle and thinner at the edges. When placed in front of a distant light source, a convex lens converges all the light from the source to a single point on the side away from the source. On the other hand, a concave lens is thinner at the middle and thicker at the edges. When placed in front of a distant light source, a concave lens diverges the light from the source on the side away from the source.

Can this method be used to find the approximate focal length of a concave lens?

Ans: No, this method cannot be used to find the approximate focal length of a concave lens. This is because a concave lens does not form a real image of an object placed at infinity. So the observer cannot obtain the image on a screen and hence cannot measure its distance.

What type of mirror is used in a torch? Give reasons.

Ans: A concave mirror is used in the torch. This is because when a glowing bulb is placed at the focus of a concave mirror, its light is reflected from the mirror as a parallel beam.

What type of mirror is used as a shaving mirror? Why?

Ans: A concave mirror is used as a shaving mirror. This is because when an object is placed very close to such a mirror, a virtual and magnified image of the object is observed.

Viva Questions

Why don't full length mirrors have a radius of curvature?

Ans: Full length mirrors are plane mirrors, which are not part of a sphere. Hence they have no finite radius of curvature; it is said to be infinite. Likewise, they have none of the characteristics of curved mirrors and lenses, such as a principal axis, principal focus and optical centre.

How are spherical mirrors different from plain mirrors?

Ans: Spherical mirrors have curved reflecting surfaces and an associated centre of curvature, principal focus, pole and principal axis, which enable them to form real images of objects. Plane mirrors do not have such characteristics.

Define the center of curvature of an optical device.

Ans: The centre of curvature of an optical device is the centre of the sphere of which the device forms a part. It lies on the principal axis, twice as far from the pole (or optical centre) of the device as the principal focus.

Which optical devices form virtual and erect images?

Ans: Convex mirror and concave lens produce virtual and erect images as the light rays reflected from the object do not actually meet at any given point, but only apparently meet to the observer. Such images cannot be obtained on the screen.

In which case the image formed by a concave mirror is virtual and erect?

Ans: When the object is placed between principal focus and pole, the image formed is virtual and erect and the light rays reflected from the object do not actually meet at any point.

How does the size of the image formed by a convex lens change as the object approaches the lens?

Ans: As the object approaches the lens (while remaining farther from it than the focal length), the real, inverted image formed on the other side of the lens grows larger, so the magnification increases. Once the object comes closer than the focal length, the lens instead forms a magnified, erect, virtual image. This finds a practical application in magnifying glasses.

What is the working principle of optical mirrors?

Ans: Optical mirrors are based on the principle of reflection of light. When the light rays strike the object and get reflected, they are incident towards the mirror surface and follow laws of reflection. Hence, the light waves get reflected again and meet to form the image of the object.

What is the working principle of optical lenses?

Ans: Optical lenses are based on the principle of refraction of light. When the light rays strike the object and get reflected, they are incident towards the lens surface and follow Snell’s laws of refraction. Hence, the light waves get refracted and meet to form the image of the object.

How does the mirror formula differ from lens formula?

Ans: In mirror formula, the reciprocal of focal length is given by the sum of the individual reciprocals of the image distance and object distance. In the lens formula, the reciprocal of focal length is given by the difference of the individual reciprocals of the image distance and object distance.
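Written out with the usual Cartesian sign convention, the two relations referred to in the answer are

\[ \text{mirror:}\ \frac{1}{f} = \frac{1}{v} + \frac{1}{u}, \qquad \text{lens:}\ \frac{1}{f} = \frac{1}{v} - \frac{1}{u}. \]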

Do lenses refract light of all frequencies?

Ans: Yes, optical lenses are capable of refracting light of all frequencies of the electromagnetic spectrum that they transmit.

Practical Based Questions

Optical lenses are based on:

Interference

Polarization

Which of the following always produces a virtual image?

Convex mirror

Concave mirror

Convex lens

None of the above

Which of the following is thicker at the middle but thinner at the ends?

Concave lens

For radius of curvature R and focal length f, which of the following relations is correct?

Which mirror is commonly used in our homes?

Full length mirror with stand

Inclined mirror

Which substance has the highest reflectivity?

Pick the field in which spherical mirrors do not find applications.

Vehicle headlights

Rear view mirrors

RADAR mirrors

Security mirrors

Which of the following is capable of igniting a fire using sun rays?

What is the focal length of a full length mirror with a stand?

1000 meters

 What is the sign of image distance in a convex lens?

Depends on object size

Depends on image size

From the above experiment, we can conclude that a concave mirror and a convex lens produce a real and inverted image of a distant object. Curved mirrors are carved out of a sphere of glass and hence have certain spherical parameters, while plane mirrors do not.

We also learnt various concepts related to such mirrors and lenses and a practical way to determine the focal lengths of the same.

We hope this has given the reader a clear picture of the various concepts of the topic and serves as a motivation to explore the field further.


25.6 Image Formation by Lenses

Learning Objectives

By the end of this section, you will be able to:

  • List the rules for ray tracing for thin lenses.
  • Illustrate the formation of images using the technique of ray tracing.
  • Determine power of a lens given the focal length.

Lenses are found in a huge array of optical instruments, ranging from a simple magnifying glass to the eye to a camera’s zoom lens. In this section, we will use the law of refraction to explore the properties of lenses and how they form images.

The word lens derives from the Latin word for a lentil bean, the shape of which is similar to the convex lens in Figure 25.25 . The convex lens shown has been shaped so that all light rays that enter it parallel to its axis cross one another at a single point on the opposite side of the lens. (The axis is defined to be a line normal to the lens at its center, as shown in Figure 25.25 .) Such a lens is called a converging (or convex) lens for the converging effect it has on light rays. An expanded view of the path of one ray through the lens is shown, to illustrate how the ray changes direction both as it enters and as it leaves the lens. Since the index of refraction of the lens is greater than that of air, the ray moves towards the perpendicular as it enters and away from the perpendicular as it leaves. (This is in accordance with the law of refraction.) Due to the lens's shape, light is thus bent toward the axis at both surfaces. The point at which the rays cross is defined to be the focal point F of the lens. The distance from the center of the lens to its focal point is defined to be the focal length f of the lens. Figure 25.26 shows how a converging lens, such as that in a magnifying glass, can converge the nearly parallel light rays from the sun to a small spot.

Converging or Convex Lens

The lens in which light rays that enter it parallel to its axis cross one another at a single point on the opposite side with a converging effect is called converging lens.

Focal Point F

The point at which the light rays cross is called the focal point F of the lens.

Focal Length f

The distance from the center of the lens to its focal point is called the focal length f.

The greater effect a lens has on light rays, the more powerful it is said to be. For example, a powerful converging lens will focus parallel light rays closer to itself and will have a smaller focal length than a weak lens. The light will also focus into a smaller and more intense spot for a more powerful lens. The power P of a lens is defined to be the inverse of its focal length. In equation form, this is

\[ P = \frac{1}{f}, \]

where f is the focal length of the lens, which must be given in meters (and not cm or mm). The power of a lens P has the unit diopters (D), provided that the focal length is given in meters. That is, 1 D = 1/m, or 1 m⁻¹. (Note that this power (optical power, actually) is not the same as power in watts defined in Work, Energy, and Energy Resources . It is a concept related to the effect of optical devices on light.) Optometrists prescribe common spectacles and contact lenses in units of diopters.

Example 25.5

What is the power of a common magnifying glass?

Suppose you take a magnifying glass out on a sunny day and you find that it concentrates sunlight to a small spot 8.00 cm away from the lens. What are the focal length and power of the lens?

The situation here is the same as those shown in Figure 25.25 and Figure 25.26 . The Sun is so far away that the Sun’s rays are nearly parallel when they reach Earth. The magnifying glass is a convex (or converging) lens, focusing the nearly parallel rays of sunlight. Thus the focal length of the lens is the distance from the lens to the spot, and its power is the inverse of this distance (in m).

The focal length of the lens is the distance from the center of the lens to the spot, given to be 8.00 cm. Thus,

\[ f = 8.00\ \text{cm}. \]

To find the power of the lens, we must first convert the focal length to meters; then, we substitute this value into the equation for power. This gives

\[ P = \frac{1}{f} = \frac{1}{0.0800\ \text{m}} = 12.5\ \text{D}. \]

This is a relatively powerful lens. The power of a lens in diopters should not be confused with the familiar concept of power in watts. It is an unfortunate fact that the word “power” is used for two completely different concepts. If you examine a prescription for eyeglasses, you will note lens powers given in diopters. If you examine the label on a motor, you will note energy consumption rate given as a power in watts.

Figure 25.27 shows a concave lens and the effect it has on rays of light that enter it parallel to its axis (the path taken by ray 2 in the figure is the axis of the lens). The concave lens is a diverging lens , because it causes the light rays to bend away (diverge) from its axis. In this case, the lens has been shaped so that all light rays entering it parallel to its axis appear to originate from the same point, F, defined to be the focal point of a diverging lens. The distance from the center of the lens to the focal point is again called the focal length f of the lens. Note that the focal length and power of a diverging lens are defined to be negative. For example, if the distance to F in Figure 25.27 is 5.00 cm, then the focal length is f = -5.00 cm and the power of the lens is P = -20 D. An expanded view of the path of one ray through the lens is shown in the figure to illustrate how the shape of the lens, together with the law of refraction, causes the ray to follow its particular path and be diverged.

Diverging Lens

A lens that causes the light rays to bend away from its axis is called a diverging lens.

As noted in the initial discussion of the law of refraction in The Law of Refraction , the paths of light rays are exactly reversible. This means that the direction of the arrows could be reversed for all of the rays in Figure 25.25 and Figure 25.27 . For example, if a point light source is placed at the focal point of a convex lens, as shown in Figure 25.28 , parallel light rays emerge from the other side.

Ray Tracing and Thin Lenses

Ray tracing is the technique of determining or following (tracing) the paths that light rays take. For rays passing through matter, the law of refraction is used to trace the paths. Here we use ray tracing to help us understand the action of lenses in situations ranging from forming images on film to magnifying small print to correcting nearsightedness. While ray tracing for complicated lenses, such as those found in sophisticated cameras, may require computer techniques, there is a set of simple rules for tracing rays through thin lenses. A thin lens is defined to be one whose thickness allows rays to refract, as illustrated in Figure 25.25 , but does not allow properties such as dispersion and aberrations. An ideal thin lens has two refracting surfaces but the lens is thin enough to assume that light rays bend only once. A thin symmetrical lens has two focal points, one on either side and both at the same distance from the lens. (See Figure 25.29 .) Another important characteristic of a thin lens is that light rays through its center are deflected by a negligible amount, as seen in Figure 25.30 .

A thin lens is defined to be one whose thickness allows rays to refract but does not allow properties such as dispersion and aberrations.

Take-Home Experiment: A Visit to the Optician

Look through your eyeglasses (or those of a friend) backward and forward and comment on whether they act like thin lenses.

Using paper, pencil, and a straight edge, ray tracing can accurately describe the operation of a lens. The rules for ray tracing for thin lenses are based on the illustrations already discussed:

  • A ray entering a converging lens parallel to its axis passes through the focal point F of the lens on the other side. (See rays 1 and 3 in Figure 25.25 .)
  • A ray entering a diverging lens parallel to its axis seems to come from the focal point F. (See rays 1 and 3 in Figure 25.27 .)
  • A ray passing through the center of either a converging or a diverging lens does not change direction. (See Figure 25.30 , and see ray 2 in Figure 25.25 and Figure 25.27 .)
  • A ray entering a converging lens through its focal point exits parallel to its axis. (The reverse of rays 1 and 3 in Figure 25.25 .)
  • A ray that enters a diverging lens by heading toward the focal point on the opposite side exits parallel to the axis. (The reverse of rays 1 and 3 in Figure 25.27 .)

Rules for Ray Tracing

  • A ray entering a converging lens parallel to its axis passes through the focal point F of the lens on the other side.
  • A ray entering a diverging lens parallel to its axis seems to come from the focal point F.
  • A ray passing through the center of either a converging or a diverging lens does not change direction.
  • A ray entering a converging lens through its focal point exits parallel to its axis.
  • A ray that enters a diverging lens by heading toward the focal point on the opposite side exits parallel to the axis.

Image Formation by Thin Lenses

In some circumstances, a lens forms an obvious image, such as when a movie projector casts an image onto a screen. In other cases, the image is less obvious. Where, for example, is the image formed by eyeglasses? We will use ray tracing for thin lenses to illustrate how they form images, and we will develop equations to describe the image formation quantitatively.

Consider an object some distance away from a converging lens, as shown in Figure 25.31 . To find the location and size of the image formed, we trace the paths of selected light rays originating from one point on the object, in this case the top of the person’s head. The figure shows three rays from the top of the object that can be traced using the ray tracing rules given above. (Rays leave this point going in many directions, but we concentrate on only a few with paths that are easy to trace.) The first ray is one that enters the lens parallel to its axis and passes through the focal point on the other side (rule 1). The second ray passes through the center of the lens without changing direction (rule 3). The third ray passes through the nearer focal point on its way into the lens and leaves the lens parallel to its axis (rule 4). The three rays cross at the same point on the other side of the lens. The image of the top of the person’s head is located at this point. All rays that come from the same point on the top of the person’s head are refracted in such a way as to cross at the point shown. Rays from another point on the object, such as her belt buckle, will also cross at another common point, forming a complete image, as shown. Although three rays are traced in Figure 25.31 , only two are necessary to locate the image. It is best to trace rays for which there are simple ray tracing rules. Before applying ray tracing to other situations, let us consider the example shown in Figure 25.31 in more detail.

The image formed in Figure 25.31 is a real image , meaning that it can be projected. That is, light rays from one point on the object actually cross at the location of the image and can be projected onto a screen, a piece of film, or the retina of an eye, for example. Figure 25.32 shows how such an image would be projected onto film by a camera lens. This figure also shows how a real image is projected onto the retina by the lens of an eye. Note that the image is there whether it is projected onto a screen or not.

The image in which light rays from one point on the object actually cross at the location of the image and can be projected onto a screen, a piece of film, or the retina of an eye is called a real image.

Several important distances appear in Figure 25.31 . We define d_o to be the object distance, the distance of an object from the center of a lens. Image distance d_i is defined to be the distance of the image from the center of a lens. The height of the object and height of the image are given the symbols h_o and h_i, respectively. Images that appear upright relative to the object have heights that are positive and those that are inverted have negative heights. Using the rules of ray tracing and making a scale drawing with paper and pencil, like that in Figure 25.31 , we can accurately describe the location and size of an image. But the real benefit of ray tracing is in visualizing how images are formed in a variety of situations. To obtain numerical information, we use a pair of equations that can be derived from a geometric analysis of ray tracing for thin lenses. The thin lens equations are

\[ \frac{1}{d_o} + \frac{1}{d_i} = \frac{1}{f} \qquad \text{and} \qquad m = \frac{h_i}{h_o} = -\frac{d_i}{d_o}. \]

We define the ratio of image height to object height (h_i/h_o) to be the magnification m. (The minus sign in the equation above will be discussed shortly.) The thin lens equations are broadly applicable to all situations involving thin lenses (and "thin" mirrors, as we will see later). We will explore many features of image formation in the following worked examples.

Image Distance

The distance of the image from the center of the lens is called image distance.

Thin Lens Equations and Magnification

\[ \frac{1}{d_o} + \frac{1}{d_i} = \frac{1}{f}, \qquad m = \frac{h_i}{h_o} = -\frac{d_i}{d_o}. \]

The image distance is taken to be positive for a real image and negative for a virtual image. The focal length is positive for converging lenses and negative for diverging lenses.
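The following short Python sketch is not part of the OpenStax text; it simply evaluates the two equations above under the stated sign convention, and the three calls reproduce the numbers of Examples 25.6–25.8 below.

def thin_lens(d_o: float, f: float) -> tuple[float, float]:
    """Return (image distance d_i, magnification m) for a thin lens.

    Sign convention as in the text: f > 0 for a converging lens, f < 0 for a
    diverging lens; d_i > 0 means a real image, d_i < 0 a virtual image.
    """
    d_i = 1.0 / (1.0 / f - 1.0 / d_o)
    m = -d_i / d_o
    return d_i, m

print(thin_lens(0.750, 0.500))   # case 1: d_i = +1.50 m,  m = -2.00
print(thin_lens(7.50, 10.0))     # case 2: d_i = -30.0 cm, m = +4.00
print(thin_lens(7.50, -10.0))    # case 3: d_i ~ -4.29 cm, m ~ +0.57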

Example 25.6

Finding the image of a light bulb filament by ray tracing and by the thin lens equations.

A clear glass light bulb is placed 0.750 m from a convex lens having a 0.500 m focal length, as shown in Figure 25.33 . Use ray tracing to get an approximate location for the image. Then use the thin lens equations to calculate (a) the location of the image and (b) its magnification. Verify that ray tracing and the thin lens equations produce consistent results.

Strategy and Concept

Since the object is placed farther away from a converging lens than the focal length of the lens, this situation is analogous to those illustrated in Figure 25.31 and Figure 25.32 . Ray tracing to scale should produce similar results for d_i. Numerical solutions for d_i and m can be obtained using the thin lens equations, noting that d_o = 0.750 m and f = 0.500 m.

Solutions (Ray tracing)

The ray tracing to scale in Figure 25.33 shows two rays from a point on the bulb's filament crossing about 1.50 m on the far side of the lens. Thus the image distance d_i is about 1.50 m. Similarly, the image height based on ray tracing is greater than the object height by about a factor of 2, and the image is inverted. Thus m is about -2. The minus sign indicates that the image is inverted.

Solutions (Thin lens equations)

The thin lens equations can be used to find d_i from the given information:

\[ \frac{1}{d_o} + \frac{1}{d_i} = \frac{1}{f}. \]

Rearranging to isolate d_i gives

\[ \frac{1}{d_i} = \frac{1}{f} - \frac{1}{d_o}. \]

Entering known quantities gives a value for 1/d_i:

\[ \frac{1}{d_i} = \frac{1}{0.500\ \text{m}} - \frac{1}{0.750\ \text{m}} = \frac{0.667}{\text{m}}. \]

This must be inverted to find d_i:

\[ d_i = \frac{\text{m}}{0.667} = 1.50\ \text{m}. \]

Note that another way to find d_i is to rearrange the equation:

\[ \frac{1}{d_i} = \frac{1}{f} - \frac{1}{d_o}. \]

This yields the equation for the image distance as:

\[ d_i = \frac{f d_o}{d_o - f} = \frac{(0.500\ \text{m})(0.750\ \text{m})}{0.750\ \text{m} - 0.500\ \text{m}} = 1.50\ \text{m}. \]

Note that there is no inverting here.

The thin lens equations can be used to find the magnification m, since both d_i and d_o are known. Entering their values gives

\[ m = -\frac{d_i}{d_o} = -\frac{1.50\ \text{m}}{0.750\ \text{m}} = -2.00. \]

Note that the minus sign causes the magnification to be negative when the image is inverted. Ray tracing and the use of the thin lens equations produce consistent results. The thin lens equations give the most precise results, being limited only by the accuracy of the given information. Ray tracing is limited by the accuracy with which you can draw, but it is highly useful both conceptually and visually.

Real images, such as the one considered in the previous example, are formed by converging lenses whenever an object is farther from the lens than its focal length. This is true for movie projectors, cameras, and the eye. We shall refer to these as case 1 images. A case 1 image is formed when d_o > f and f is positive, as in Figure 25.34 (a). (A summary of the three cases or types of image formation appears at the end of this section.)

A different type of image is formed when an object, such as a person's face, is held close to a convex lens. The image is upright and larger than the object, as seen in Figure 25.34 (b), and so the lens is called a magnifier. If you slowly pull the magnifier away from the face, you will see that the magnification steadily increases until the image begins to blur. Pulling the magnifier even farther away produces an inverted image as seen in Figure 25.34 (a). The distance at which the image blurs, and beyond which it inverts, is the focal length of the lens. To use a convex lens as a magnifier, the object must be closer to the converging lens than its focal length. This is called a case 2 image. A case 2 image is formed when d_o < f and f is positive.

Figure 25.35 uses ray tracing to show how an image is formed when an object is held closer to a converging lens than its focal length. Rays coming from a common point on the object continue to diverge after passing through the lens, but all appear to originate from a point at the location of the image. The image is on the same side of the lens as the object and is farther away from the lens than the object. This image, like all case 2 images, cannot be projected and, hence, is called a virtual image . Light rays only appear to originate at a virtual image; they do not actually pass through that location in space. A screen placed at the location of a virtual image will receive only diffuse light from the object, not focused rays from the lens. Additionally, a screen placed on the opposite side of the lens will receive rays that are still diverging, and so no image will be projected on it. We can see the magnified image with our eyes, because the lens of the eye converges the rays into a real image projected on our retina. Finally, we note that a virtual image is upright and larger than the object, meaning that the magnification is positive and greater than 1.

Virtual Image

An image that is on the same side of the lens as the object and cannot be projected on a screen is called a virtual image.

Example 25.7

Image produced by a magnifying glass.

Suppose the book page in Figure 25.35 (a) is held 7.50 cm from a convex lens of focal length 10.0 cm, such as a typical magnifying glass might have. What magnification is produced?

We are given that d_o = 7.50 cm and f = 10.0 cm, so we have a situation where the object is placed closer to the lens than its focal length. We therefore expect to get a case 2 virtual image with a positive magnification that is greater than 1. Ray tracing produces an image like that shown in Figure 25.35 , but we will use the thin lens equations to get numerical solutions in this example.

To find the magnification m, we try to use the magnification equation, m = -d_i/d_o. We do not have a value for d_i, so we must first find the location of the image using the thin lens equation. (The procedure is the same as followed in the preceding example, where d_o and f were known.) Rearranging the thin lens equation to isolate d_i gives

\[ \frac{1}{d_i} = \frac{1}{f} - \frac{1}{d_o}. \]

Entering known values, we obtain a value for 1/d_i:

\[ \frac{1}{d_i} = \frac{1}{10.0\ \text{cm}} - \frac{1}{7.50\ \text{cm}} = -\frac{0.0333}{\text{cm}}, \]

so that d_i = -30.0 cm.

Now the magnification equation can be used to find the magnification m, since both d_i and d_o are known. Entering their values gives

\[ m = -\frac{d_i}{d_o} = -\frac{-30.0\ \text{cm}}{7.50\ \text{cm}} = 4.00. \]

A number of results in this example are true of all case 2 images, as well as being consistent with Figure 25.35 . Magnification is indeed positive (as predicted), meaning the image is upright. The magnification is also greater than 1, meaning that the image is larger than the object—in this case, by a factor of 4. Note that the image distance is negative. This means the image is on the same side of the lens as the object. Thus the image cannot be projected and is virtual. (Negative values of d i d i occur for virtual images.) The image is farther from the lens than the object, since the image distance is greater in magnitude than the object distance. The location of the image is not obvious when you look through a magnifier. In fact, since the image is bigger than the object, you may think the image is closer than the object. But the image is farther away, a fact that is useful in correcting farsightedness, as we shall see in a later section.

A third type of image is formed by a diverging or concave lens. Try looking through eyeglasses meant to correct nearsightedness. (See Figure 25.36 .) You will see an image that is upright but smaller than the object. This means that the magnification is positive but less than 1. The ray diagram in Figure 25.37 shows that the image is on the same side of the lens as the object and, hence, cannot be projected—it is a virtual image. Note that the image is closer to the lens than the object. This is a case 3 image, formed for any object by a negative focal length or diverging lens.

Example 25.8

Image produced by a concave lens.

Suppose an object such as a book page is held 7.50 cm from a concave lens of focal length –10.0 cm. Such a lens could be used in eyeglasses to correct pronounced nearsightedness. What magnification is produced?

This example is identical to the preceding one, except that the focal length is negative for a concave or diverging lens. The method of solution is thus the same, but the results are different in important ways.

To find the magnification m, we must first find the image distance d_i using the thin lens equation

\[ \frac{1}{d_i} = \frac{1}{f} - \frac{1}{d_o}, \]

or its alternative rearrangement

\[ d_i = \frac{f d_o}{d_o - f}. \]

We are given that f = -10.0 cm and d_o = 7.50 cm. Entering these yields a value for 1/d_i:

\[ \frac{1}{d_i} = \frac{1}{-10.0\ \text{cm}} - \frac{1}{7.50\ \text{cm}} = -\frac{0.2333}{\text{cm}}, \]

so that d_i = -4.29 cm.

Now the magnification equation can be used to find the magnification m, since both d_i and d_o are known. Entering their values gives

\[ m = -\frac{d_i}{d_o} = -\frac{-4.29\ \text{cm}}{7.50\ \text{cm}} = 0.571. \]

A number of results in this example are true of all case 3 images, as well as being consistent with Figure 25.37 . Magnification is positive (as predicted), meaning the image is upright. The magnification is also less than 1, meaning the image is smaller than the object—in this case, a little over half its size. The image distance is negative, meaning the image is on the same side of the lens as the object. (The image is virtual.) The image is closer to the lens than the object, since the image distance is smaller in magnitude than the object distance. The location of the image is not obvious when you look through a concave lens. In fact, since the image is smaller than the object, you may think it is farther away. But the image is closer than the object, a fact that is useful in correcting nearsightedness, as we shall see in a later section.

Table 25.3 summarizes the three types of images formed by single thin lenses. These are referred to as case 1, 2, and 3 images. Convex (converging) lenses can form either real or virtual images (cases 1 and 2, respectively), whereas concave (diverging) lenses can form only virtual images (always case 3). Real images are always inverted, but they can be either larger or smaller than the object. For example, a slide projector forms an image larger than the slide, whereas a camera makes an image smaller than the object being photographed. Virtual images are always upright and cannot be projected. Virtual images are larger than the object only in case 2, where a convex lens is used. The virtual image produced by a concave lens is always smaller than the object—a case 3 image. We can see and photograph virtual images only by using an additional lens to form a real image.

Table 25.3. The three types of images formed by thin lenses:

Type | Formed when | Image type | d_i | m
Case 1 | f positive, d_o > f | real | positive | negative
Case 2 | f positive, d_o < f | virtual | negative | positive (m > 1)
Case 3 | f negative | virtual | negative | positive (m < 1)

In Image Formation by Mirrors , we shall see that mirrors can form exactly the same types of images as lenses.

Take-Home Experiment: Concentrating Sunlight

Find several lenses and determine whether they are converging or diverging. In general those that are thicker near the edges are diverging and those that are thicker near the center are converging. On a bright sunny day take the converging lenses outside and try focusing the sunlight onto a piece of paper. Determine the focal lengths of the lenses. Be careful because the paper may start to burn, depending on the type of lens you have selected.

Problem-Solving Strategies for Lenses

Step 1. Examine the situation to determine that image formation by a lens is involved.

Step 2. Determine whether ray tracing, the thin lens equations, or both are to be employed. A sketch is very useful even if ray tracing is not specifically required by the problem. Write symbols and values on the sketch.

Step 3. Identify exactly what needs to be determined in the problem (identify the unknowns).

Step 4. Make a list of what is given or can be inferred from the problem as stated (identify the knowns). It is helpful to determine whether the situation involves a case 1, 2, or 3 image. While these are just names for types of images, they have certain characteristics (given in Table 25.3 ) that can be of great use in solving problems.

Step 5. If ray tracing is required, use the ray tracing rules listed near the beginning of this section.

Step 6. Most quantitative problems require the use of the thin lens equations. These are solved in the usual manner by substituting knowns and solving for unknowns. Several worked examples serve as guides.

Step 7. Check to see if the answer is reasonable: Does it make sense ? If you have identified the type of image (case 1, 2, or 3), you should assess whether your answer is consistent with the type of image, magnification, and so on.

Misconception Alert

We do not realize that light rays are coming from every part of the object, passing through every part of the lens, and all can be used to form the final image.

We generally feel the entire lens, or mirror, is needed to form an image. Actually, half a lens will form the same, though a fainter, image.



To Find the Focal Length of a Concave Lens, Using a Convex Lens

The distance between the optical centre of a lens (or the pole of a mirror) and its principal focus is called the focal length. The focal length can be positive or negative. A lens is a piece of transparent glass which converges or disperses light rays passing through it by refraction. In this article, let us learn how to find the focal length of a concave lens using a convex lens.

Aim

To find the focal length of a concave lens using a convex lens.

Materials Required

  • An optical bench with four uprights
  • A convex lens of short focal length
  • A concave lens of longer focal length
  • Two lens holders
  • One thick and one thin optical needle
  • A knitting needle
  • A half-metre scale

We use the lens formula in this experiment to calculate the focal length of the concave lens:

\[ \frac{1}{f} = \frac{1}{v} - \frac{1}{u}, \qquad \text{i.e.} \qquad f = \frac{uv}{u - v}, \]

where

  • f is the focal length of the concave lens L2,
  • u is the distance of I from the optical centre of the lens L2,
  • v is the distance of I' from the optical centre of the lens L2.

From the sign convention, the f obtained from the above formula will be negative, since v > u and u - v is negative.
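For instance, with hypothetical magnitudes u = 20 cm and v = 30 cm,

\[ f = \frac{uv}{u - v} = \frac{20 \times 30}{20 - 30}\ \text{cm} = -60\ \text{cm}, \]

which is negative, as expected for a concave lens.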

Ray Diagram

Focal length of a concave lens

To determine the rough focal length of the convex lens

  • Place the convex lens on the lens holder.
  • Now face the lens towards a distant tree or building.
  • Obtain the image either on the white wall or on a screen and keep moving the lens either forward or backwards till a sharp image is formed.
  • To determine the rough focal length of the lens, measure the distance between the lens and the screen.

To set the convex lens

  • Place the convex lens in the holder on the fixed upright, with the upright kept at the 50 cm mark.
  • The lens should be placed in such a way that its surface is vertical and perpendicular to the length of the optical bench.
  • The upright should be kept in this position throughout.

To set the object needle

  • Place the thin optical needle (the object needle O) on the movable upright near the zero end of the bench.
  • Place the object-needle upright at a distance nearly 1.5 times the focal length of the convex lens.
  • The tip of the needle should be at the same height as the optical centre of the lens.
  • Note the position of the index mark below the object needle upright.
  • Close the left eye and, with the right eye open, look through the lens from the other side to see an inverted and enlarged image of the object needle near the middle of the lens.
  • On the other end of the optical bench, place the image needle on the fourth upright.
  • The tip of the image needle should be in line with the image that is seen with the right eye.
  • To check for parallax, move the eye towards the right; if parallax is present, the image needle and the image of the object needle will no longer appear in line.
  • Remove the parallax tip to tip.
  • Note the position of the index mark at the base of the image needle upright.
  • Record the position of the index marks.
  • Now place the concave lens in its holder on the side of the convex lens where the image I is formed.
  • Keep the concave-lens upright at a small distance from the convex lens.
  • The concave lens should be placed so that its principal axis coincides with that of the convex lens.

To set the image needle at I’

  • Repeat steps 4 and 5.

To get more observations

  • Repeat the experiment by moving the object needle towards the lens by 2 cm.
  • Repeat the experiment by moving the object needle away from the lens by 2 cm.
  • Record all the observations.

Observations

The rough focal length of a convex lens = ……….

The actual length of the knitting needle, x = ………

Observed distance between the concave lens and the image needle when the knitting needle is placed between them, y = ……..

Index correction for u as well as v, x – y = ……..

Table for u, v and f

S.No. | Position of L2 (cm) | Position of I (cm) | Position of I' (cm) | u (cm) | v (cm) | f = uv/(u - v) (cm)
1. | | | | | | f =
2. | | | | | | f =
3. | | | | | | f =

Calculations

  • Find the observed u as the difference between the positions of L2 and I.
  • Find the observed v as the difference between the positions of L2 and I'.
  • Obtain the corrected values of u and v by applying the index correction.
  • Calculate \( f = \frac{uv}{u-v} \).
  • Find the mean of the values of f (a short computational sketch follows the result below).

The focal length of the given concave lens = …….. cm.
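A minimal Python sketch of the calculation steps above (not part of the manual; the positions and index correction below are hypothetical values in cm):

index_correction = 0.2   # x - y from the knitting-needle observation, applied with its sign

# Each row: (position of concave lens L2, position of I, position of I')
readings = [(60.0, 72.0, 80.0), (60.0, 70.0, 76.5), (60.0, 74.0, 85.0)]

focal_lengths = []
for pos_L2, pos_I, pos_I2 in readings:
    u = abs(pos_I - pos_L2) + index_correction    # corrected distance of I from L2
    v = abs(pos_I2 - pos_L2) + index_correction   # corrected distance of I' from L2
    focal_lengths.append(u * v / (u - v))         # negative, since v > u

print(f"mean focal length = {sum(focal_lengths) / len(focal_lengths):.1f} cm")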

Precautions

  • The lens must be clean.
  • The focal length of the convex lens should be less than that of the concave lens.
  • Index correction for u and v should be applied.
  • To obtain a real and inverted image, the object needle should be kept at a distance greater than the focal length of the convex lens.
  • To avoid parallax, a distance of at least 30 cm should be maintained between the tip of the needle and eye.

Sources of Error

  • The uprights might not be perfectly vertical.
  • The removal of parallax might not be perfect.

Viva Questions

Q1. What is a spherical lens?

Ans: A spherical lens is a lens whose refracting surfaces are parts of spheres.

Q2. What type of lens is present in the human eye?

Ans: A convex lens is present in the human eye.

Q3. What are the factors affecting the power of a lens?

Ans: Following are the factors affecting the power of a lens:

  • Refractive index of the material used in the lens.
  • The change in medium.
  • The radius of curvature.
  • The wavelength of the light.
  • The thickness of the lens.

Q4. What is the focal length of a lens?

Ans: The focal length of a lens is defined as the distance between the optical centre and the principal focus of the lens.

Q5. What are the uses of lenses?

Ans: Lenses are used in spectacles, microscopes, in optical instruments, and in telescopes.



Determining the focal length of a convex lens (no-parallax method)

This is about the experiment where we determine the focal length of a convex lens using the graphical method (the no-parallax method). In the situation where the image moves opposite to the direction that I move my eye, do I have to move the observation pin towards my eye or away from my eye?

  • experimental-physics


  • "What do you mean by 'graphical method'?" – Farcher, Apr 29, 2019
  • "@Farcher finding the focal length by drawing a graph for different u and v values we obtain experimentally" – AfiJaabb, Apr 30, 2019

In the method of no-parallax you need to have the image of the object pin in the same position (position of no parallax) as the observation pin.

I hope that the diagram below is self explanatory?

[Diagram: locating the position of no parallax between the image of the object pin and the observation pin]

  • "So I should move the pin towards my eye, right?" – AfiJaabb, May 2, 2019


  • Open access
  • Published: 18 September 2024

Optical fibre based artificial compound eyes for direct static imaging and ultrafast motion detection

Heng Jiang, Chi Chung Tsoi, Weixing Yu, Mengchao Ma, Mingjie Li, Zuankai Wang & Xuming Zhang

Light: Science & Applications volume  13 , Article number:  256 ( 2024 ) Cite this article


Subjects: Fibre optics and optical communications; Imaging and sensing; Integrated optics; Optical sensors; Photonic devices

Natural selection has driven arthropods to evolve fantastic natural compound eyes (NCEs) with a unique anatomical structure, providing a promising blueprint for artificial compound eyes (ACEs) to achieve static and dynamic perceptions in complex environments. Specifically, each NCE utilises an array of ommatidia, the imaging units, distributed on a curved surface to enable abundant merits. This has inspired the development of many ACEs using various microlens arrays, but the reported ACEs have limited performances in static imaging and motion detection. Particularly, it is challenging to mimic the apposition modality to effectively transmit light rays collected by many microlenses on a curved surface to a flat imaging sensor chip while preserving their spatial relationships without interference. In this study, we integrate 271 lensed polymer optical fibres into a dome-like structure to faithfully mimic the structure of NCE. Our ACE has several parameters comparable to the NCEs: 271 ommatidia versus 272 for bark beetles, and 180° field of view (FOV) versus 150–180° FOV for most arthropods. In addition, our ACE outperforms the typical NCEs by ~100 times in dynamic response: 31.3 kHz versus 205 Hz for Glossina morsitans. Compared with other reported ACEs, our ACE enables real-time, 180° panoramic direct imaging and depth estimation within its nearly infinite depth of field. Moreover, our ACE can respond to an angular motion up to 5.6 × 10⁶ deg/s with the ability to identify translation and rotation, making it suitable for applications to capture high-speed objects, such as surveillance, unmanned aerial/ground vehicles, and virtual reality.


Introduction.

Natural compound eyes (NCEs) were first investigated by Robert Hooke in 1665 after he observed orderly arranged pearls in the cornea of a grey drone fly (Fig. 1a ) 1 . This research increased interest in NCEs 2 . Later, Sigmund Exner proposed the ommatidium as the basic unit of a compound eye. In each ommatidium of the NCE, light is first collected by a corneal facet lens (i.e., a pearl ) at a certain acceptance angle and then transmitted by a crystalline cone and rhabdom (i.e., a light guide) to photoreceptor cells 3 (Fig. 1b ). Ommatidia are further innervated by axon bundles that execute synaptic connections in lamina cartridges 4 . After the primary signal is processed in deeper neural centres, such as the medulla and lobula (Fig. 1c ) 5 , information is finally transmitted to central brain regions. Unlike the monocular eyes of vertebrates, NCEs utilise ommatidia arrayed on a curved surface (Fig. 1c ). Furthermore, NCEs have many advantages, such as a panoramic field of view (FOV), good depth perception, negligible aberration, and fast motion tracking capability 6 , 7 , 8 .

figure 1

a The fly Choerades fimbriata has natural compound eyes (NCEs) for imaging; photograph courtesy of Mr. Thorben Danke of Sagaoptics. The inset shows compactly arranged corneal facet lenses in the NCEs. b In a natural ommatidium, the facet lens with a focal length f collects light at a specific acceptance angle Δ φ , the crystalline cone ensures light convergence, the rhabdom (diameter d ) transmits light through the inner structure, and the photoreceptor cell records the light information. c An NCE consists of numerous natural ommatidia, which are surrounded by pigment cells to prevent crosstalk. Here, the interommatidial angle ∆Φ = D / R , where D and R denote the arc distance of adjacent ommatidia and the local radius of curvature, respectively. d Comparison of different compound eyes in the functions of static panoramic imaging and dynamic motion detection. The 1st generation ACEs primarily focused on the fabrication of ACE microlenses, lacking the ability of static imaging or dynamic detection. In the 2nd generation ACEs, none of these ACEs could realise real-time panoramic direct imaging and dynamic motion detection simultaneously, as what the NCEs can do. In contrast, our ACEcam is comparable to the NCEs in aspects of 180 o field of view and static imaging, and surpasses the NCEs in ultrafast motion detection. e An artificial ommatidium closely resembles a natural ommatidium by using a microlens to mimic the facet lens and the crystalline cone, an optical fibre core to mimic the rhabdom, an optical fibre cladding to mimic the pigment cells, an imaging lens to mimic synaptic units to focus each optical fibre onto an individual photodetector, and a photodetector in the flat imaging sensor chip to mimic the photoreceptor cell. f An artificial compound eye consists of numerous artificial ommatidia, with a flat imaging sensor chip mimicking the deeper neural centres (medulla and lobula), where signals are pre-processed. The signals are then transmitted to a computer for further analysis

NCEs have inspired the development of artificial compound eyes (ACEs) based on planar microlens arrays 9, curved microlens arrays 10,11,12,13,14, and metasurfaces 15. In ACEs, the photodetector cells are arranged on either curved 11,12,13 or planar 16 surfaces. Nevertheless, most of the reported ACEs do not faithfully replicate the NCE structures and therefore lack some of their advantages. ACE designs using planar microlens arrays or metasurfaces usually have limited FOVs 17 and are therefore not investigated in this study. ACEs with curved microlens arrays are briefly compared in Table 1 and elaborated in Supplementary Table S1. Specifically, the 1st generation ACEs primarily focused on the fabrication of ACE microlenses, lacking the ability to achieve panoramic imaging and dynamic detection 14,18 (Fig. 1d). Among the 2nd generation ACEs, some showed the capability of panoramic imaging, but they needed post-processing to retrieve the images, such as a single-pixel imaging technique 19, a scanning method with a mapping algorithm 12,13, and backpropagation neural networks 20. Despite recent efforts on real-time direct imaging, those ACEs still struggled with quantitative distance estimation 16. Some other ACEs could realise dynamic motion detection, but only at ordinary speeds 11. Nevertheless, none of those ACEs can match the NCEs in achieving real-time panoramic direct imaging and dynamic motion detection simultaneously (Fig. 1d, Table 1, and Supplementary Table S1). Currently, the main challenge with curved microlens array-based ACEs is how to transmit the light rays collected by many microlenses on a curved surface to a flat imaging sensor (e.g., a CMOS chip) while maintaining their spatial relationships. Optical waveguides could address this challenge, as presented in one recent study 16 that filled silicone elastomer into the hollow pipelines of a 3D-printed black substrate. Nevertheless, the waveguiding effect was poor due to the layered texture and the black colour of the 3D-printed pipeline inner surfaces. Additionally, that study did not address the optical design criteria that are essential for optical waveguides in ACEs.

Fortunately, optical fibres, which fall broadly into glass and plastic types, are promising for solving this challenge since they can capture light within a certain acceptance angle and transmit it over long distances with very low loss 21. They have been extensively applied in telecommunications 22, sensors 23, light guides 24, and imaging systems 25. Nevertheless, no complete fibre-based ACE system has yet been proposed that offers both 180° real-time direct static imaging with a nearly infinite depth of field and dynamic perception with a fast angular response. This is because silica optical fibres are stiff and break easily at large bending angles, making them unsuitable for ACEs 21, while the acceptance angle of plastic optical fibres is too large, lowering the angular resolution.

In this work, we add a microlens to one end of a plastic optical fibre (Fig. 2a ) to mimic the structure and functions of a natural ommatidium (Fig. 1e ). The lensed ends of 271 fibres (versus 272 ommatidia for bark beetles 26 ) are incorporated onto a curved surface (Figs. 1 f, 2 b) and used to assemble a biomimetic ACE as a panoramic camera (called ACEcam hereafter, Fig. 2c ). The ACEcam faithfully mimics the structure of apposition ACEs and excels in both static and dynamic perceptions, thus finding niche applications in diverse imaging and dynamic detection domains.

figure 2

a Scanning electron microscopy (SEM) image of the conical microlens on an optical fibre. b Top view of the ACEcam light receiving head that uses a 3D-printed dome to host 271 fibre ends. c Photograph of an assembled ACEcam. d Concept of image formation. Using a ‘ + ‘ line-art pattern as the object (top panel), some fibres receive light from the object (second panel), and this pattern is transmitted from the lens end to the other end of the fibre (third panel). An imaging lens is employed to project the light from the fibre ends to a flat imaging sensor chip (fourth panel top), which is then converted into the final digital image (bottom panel). e Fabrication process flow of conical microlens optical fibres. A template with an array of conical grooves is fabricated by an ultrahigh precision 3D printing method (top panel), then the first PDMS mould is made to obtain convex cones (second panel). Physical vapour deposition and electroplating are then utilised to coat Cu layers on the first PDMS mould to smooth the rough surface of the convex cones and to round the sharp tip of the convex cones (third panel). After the second pattern transfer to get the second PDMS mould, optical adhesive NOA81 is dropped (0.15 μL/drop) into each conical groove by using a microsyringe (fourth panel). Next, an optical fibre buncher is mounted on the second PDMS mould so that optical fibres are well aligned with and submerged into the NOA81 microlenses wells. After UV illumination, each optical fibre end is mounted with a conical microlens, and finally, all fibres are peeled off (bottom panel)

Assembled device

In the assembly, 271 lensed plastic optical fibres (Fig. 2a , details of fabrication will be explained below) are attached to a 3D-printed perforated dome (diameter 14 mm, Materials and methods: Fabrication of artificial compound eyes for a full-vision camera, Fig. 5a, b ) so that all the lensed ends of the fibres are on the dome surface (Fig. 2b ), while the bare ends of the fibres are placed into a perforated planar buncher (Fig. 5c ). Light leaving the bare fibre ends is projected onto a flat imaging sensor via an imaging lens (Fig. 1f ). The dome, the buncher, the imaging lens and a flat imaging sensor chip are hosted in a screwed hollow tube (Fig. 5d ). In the assembly, the 3D-printed dome has the black colour so as to absorb the leaked or stray light, functioning the same as the pigment cells in the NCEs to prevent crosstalk. The lensed plastic fibres confine the collected light, preventing crosstalk and the associated ghost images, and the buncher maintains the relative positions of the microlenses on the dome. This setup enables the light collected at the curved surface to be transmitted to a flat image sensor, thus faithfully replicating the ommatidia in an NCE.

In the image formation process, the light emitted by the object is captured at different angles by the microlenses on the dome (Fig. 2d ). At the bare fibre ends, the planar images are projected onto the flat imaging sensor chip. Then, the final images are obtained for digital image processing. The imaging lens prevents contact of the bare fibre ends with the vulnerable image chip surface.

In the NCE, if the ommatidium acceptance angle is ∆ φ  =  d / f and the interommatidial angle is ∆Φ = D / R (Fig. 1b, c ), ∆ φ should be slightly larger than ∆Φ to ensure that no angular information is lost while reducing redundant angular overlap; here, d , f , D and R denote the rhabdom diameter, the focal length of the facet lens, the arc distance of adjacent ommatidia and the local radius of curvature, respectively 27 . Similarly, in ACEcam, ∆ φ should be only slightly larger than ∆Φ = 12.2 o (Fig. 1f ). Otherwise, the receiving areas of adjacent fibres would have a large overlap, lowering the angular resolution (Supplementary Fig. S1a ).
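As a quick illustration of this design rule (a hedged sketch using only the numbers quoted in the text, not code from the paper), one can compare the interommatidial angle of the ACEcam with the fibre acceptance angles discussed below:

```python
import math

# Numbers quoted in the text (reused here for illustration only)
R_mm = 7.0          # dome external radius (see Materials and methods)
delta_PHI = 12.2    # interommatidial angle of the ACEcam, in degrees

# Implied arc distance between adjacent fibre lenses, D = delta_PHI (in rad) * R
D_mm = math.radians(delta_PHI) * R_mm
print(f"arc distance between adjacent ommatidia: {D_mm:.2f} mm")

# The text's rule: the acceptance angle should be only slightly larger than delta_PHI.
for name, delta_phi in [("bare fibre", 60.0), ("conical-microlens fibre", 45.0)]:
    print(f"{name}: acceptance {delta_phi:.0f} deg = {delta_phi / delta_PHI:.1f} x delta_PHI")
```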

Although plastic optical fibres are a good choice due to their flexibility and durability, the light acceptance angle of a plastic optical fibre is usually large (e.g., 60 o , Materials and methods: Acceptance angle of the bare plastic optical fibre, Fig. 6a, b ). Therefore, the acceptance angle of the optical fibre should be reduced by properly engineering the fibre tip (Supplementary Fig. S1b ). Here, we add a microlens with a conical shape onto the distal end of the plastic optical fibre (Fig. 2a ). Although spherical microlenses are easy to fabricate via surface tension and are thus widely used, the use of these microlenses often increases the acceptance angle (Materials and methods: Acceptance angle of the plastic optical fibre with a spherical microlens, Figs. 6 c, d, 7 a). In contrast, a conical microlens could reduce the acceptance angle (Fig. 7b ); however, these microlenses with optical smoothness are more difficult to fabricate due to the unique shape and low melting point of plastic optical fibres. Our analysis and simulations show that a half-apex angle of θ  = 35 o is the best choice for the conical microlens, reducing the acceptance angle of the fibre from 60 o to 45 o (Materials and methods: Acceptance angle of the plastic optical fibre capped with a conical microlens - Choice of shape and size of microlenses, Fig. 6e–h ). Moreover, the sharp tip of the conical microlens is rounded during the fabrication process. This rounded tip is beneficial since it ensures that light information in the central angular range is not lost (Materials and methods: Choice of shape and size of microlenses 6, Fig. 8 ).

The conical microlens plastic optical fibres are fabricated in batches by a sequence of 3D printing, electroplating and two moulding processes (Fig. 2e , Materials and methods: Fabrication of a conical microlens on an optical fibre and Fig. 9 ), which is a novel approach to add a microlens onto the distal end of an optical fibre. Approximately 200 conical-microlens optical fibres are obtained in each batch process, and each conical microlens has a smooth surface and naturally a rounded tip (Fig. 2a ). After the assembly process, the fabricated ACEcam is ready for experiments.

Static imaging

Previous studies on ACEs focused on static imaging (e.g., point-source tracking and panoramic imaging 12,13) or dynamic motion extraction 11. Nevertheless, static imaging usually requires a complex scanning system, which considerably reduces the imaging rate 12, whereas dynamic motion extraction often yields mosaic results due to the discrete distribution of photodetectors on the curved surface 11. The proposed ACEcam can perform both static imaging and dynamic motion detection and has several advantages. First, this design has an exceptionally wide FOV (i.e., 180°). For experimental verification, a laser spot is shone onto the camera from 90° to 0° at steps of 22.5° in both the x and y directions (see Fig. 3a for the combined result and Supplementary Fig. S2 for the individual images). Over the whole 180° FOV, the images are highly uniform in size, brightness and angular position. This 180° FOV allows the ACEcam to capture a wider range of light information than most ACEs, making it better suited to various applications such as surveillance and unmanned drones.

figure 3

a Combined image of a laser spot from nine angles (from −90 o to 90 o in both the x and y directions at a step of 22.5 o ). b Image of the logo of The Hong Kong Polytechnic University. c , d Depth estimation using the linear relationship between the point spread parameter σ and the reciprocal of the object distance u −1 . In ( c ), example images at four different distances u 1  = 3 mm, u 2  = 5 mm, u 2  = 7 mm, and u 4  = 9 mm are shown in the dotted box. In the image acquired at each distance, the grey values along four parallel lines (shown here in pink) in the x direction are analysed to calculate the mean value and the errors shown in ( d ). D L is the distance between a point on the pink line and the upper boundary of an image. In ( d ), the inset shows the relative grey value distribution along one sample pink line in c . A low error range signifies a high reliability of ACEcam ’ s depth estimation. e , Images of the letters ‘HK’ captured at three different polar angles relative to the centre of the camera: −50 o (top), 0 o (centre) and 50 o (bottom). f Schematic of an experimental setup to verify the nearly infinite depth of field of ACEcam. Objects A (circle) and B (triangle) are placed at angular positions of −40 o and 40 o . g Images of the circle and triangle patterns when the distance of the circle image is fixed at D A  = 2 mm and the distance of the triangle image varies from D B  = 2, 8 to 14 mm

Secondly, ACEcam also supports real-time panoramic direct imaging without distortions. Different test patterns, such as the logo of our university and the letters ‘HK’, can be imaged clearly with ACEcam (Fig. 3b, e and Supplementary Fig. S3). Unlike the ACEs developed in prior studies, which required redundant postprocessing approaches 12,13,19,20, ACEcam enables direct imaging, similar to the capabilities of real NCEs. In addition, object distances can be estimated (i.e., depth estimation). In these experiments, a checkerboard pattern is set at different object distances from the camera (red lines in Fig. 3c), and the grey values at different distances along the vertical edge direction (pink lines in Fig. 3c) are analysed to determine the relationship between the point spread parameter σ and the reciprocal of the object distance u⁻¹ (Fig. 3d), which should theoretically be linear 28,29 (Supplementary Section 2). Given the point spread parameter σ of a measured image, the distance u can be obtained directly from the fitting expression. The absolute value of the slope of this linear fit is defined as the critical parameter m of the camera in this work, representing how the imaging quality of the camera is affected by the object distance. Here, the ACEcam is determined to have a value of m = 17.42 (Fig. 3d, Supplementary Section 2). If other images with unknown distances are captured, their σ values can be calculated, and then their distances u can be determined using the linear curve between σ and u⁻¹. Next, the letters ‘HK’ are placed at three different angular positions: −50° (left), 0° (centre) and 50° (right). No image distortions are observed (Fig. 3e), showing the good panoramic imaging performance of ACEcam. In comparison with other ACEs that attain panoramic imaging through redundant post-processing methods 12,13,19,20, the capabilities of real-time direct imaging and object distance estimation of our ACEcam make it versatile across a broader range of applications. For instance, it can capture images and measure distances among moving objects in real-world scenes.
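The depth-estimation step can be condensed into a short, hedged sketch (illustrative only: σ = m·u⁻¹ + b is assumed as the fitted linear form, the slope magnitude m = 17.42 is taken from the text, and the intercept and sample data are placeholders):

```python
import numpy as np

# Assumed linear calibration: sigma = m * (1 / u) + b, with |slope| m = 17.42 as
# reported for the ACEcam; the intercept b and the sample data are placeholders.
def calibrate(u_known_mm, sigma_measured):
    """Least-squares fit of sigma against 1/u; returns (slope, intercept)."""
    return tuple(np.polyfit(1.0 / np.asarray(u_known_mm), np.asarray(sigma_measured), 1))

def estimate_distance(sigma, slope, intercept):
    """Invert the calibration: u = slope / (sigma - intercept)."""
    return slope / (sigma - intercept)

# Placeholder calibration data at known distances (mm), generated with m = 17.42, b = 0.5
u_cal = [3.0, 5.0, 7.0, 9.0]
sigma_cal = [17.42 / u + 0.5 for u in u_cal]

slope, intercept = calibrate(u_cal, sigma_cal)
print(f"fitted slope {slope:.2f}, intercept {intercept:.2f}")
print(f"estimated distance for sigma = 3.0: {estimate_distance(3.0, slope, intercept):.1f} mm")
```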

The third merit is the nearly infinite depth of field. To verify this property, two objects, a circle and a triangle, are placed at two widely separated angles and different distances (Fig. 3f ). When the distances of both objects are the same, the image sizes are similar (Fig. 3g and Supplementary Video 1 ). When the circle image is kept static and the triangle image is moved away from ACEcam, the circle image size remains unchanged, but the triangle image size decreases. The focus is always retained. The nearly infinite depth of field of ACEcam is attributed to the image formation process, as each fibre captures all the light information within its acceptance angle, regardless of the object distance. Compared with the ACEs without infinite depth of field 16 , this characteristic enables the ACEcam to perform better in certain fields, including applications in virtual reality and augmented reality, contributing to an enhanced sense of realism in augmented reality experiences.

Dynamic detection

Real-time perception ensures that the ACEcam is suitable not only for static imaging but also for dynamic detection. The fourth merit is that ACEcam can also be applied to determine optical flow according to visual translation and rotation signals. Here, the Lucas-Kanade method is adopted as the data processing algorithm due to its high efficiency in computing two-dimensional optical flow vectors based on images 30 , 31 (Supplementary Section 3 ). In these experiments, when the ACEcam is placed 10 mm in front of a checkerboard pattern and moved vertically (Supplementary Fig. S4a ), the computed optical flow vectors have uniform direction and length, illustrating the reliability and stability of ACEcam in dynamic motion detection (Fig. 4a ). Since the checkerboard pattern has alternating dark and bright regions, the direction and length of the vector represent the direction and velocity of the motion of a bright region. Moreover, when the ACEcam is rotated (Supplementary Fig. S4b ), the rotation centre can be easily identified (dark dot in Fig. 4b ) since the length of the optical flow vector has a linear relationship with the distance from the rotation centre. This motion detection capability of ACEcam may facilitate various applications, such as kinestate tracking and motion state control in robots and unmanned aerial vehicles.
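For readers unfamiliar with the algorithm, the sketch below shows how sparse Lucas-Kanade optical flow is typically computed with OpenCV on two consecutive frames; it is a generic illustration under assumed placeholder file names, not the authors' processing pipeline:

```python
import cv2
import numpy as np

# Two consecutive frames from the camera (placeholder file names)
prev_gray = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
next_gray = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Pick trackable points in the first frame (e.g., the bright checkerboard squares)
prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300,
                                   qualityLevel=0.01, minDistance=5)

# Pyramidal Lucas-Kanade: estimate where each point has moved in the next frame
next_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, prev_pts, None,
                                                 winSize=(15, 15), maxLevel=2)

# Keep only successfully tracked points and compute the flow vectors
good_prev = prev_pts[status.flatten() == 1].reshape(-1, 2)
good_next = next_pts[status.flatten() == 1].reshape(-1, 2)
flow = good_next - good_prev                   # (dx, dy) for each tracked point

speeds = np.linalg.norm(flow, axis=1)
angles = np.degrees(np.arctan2(flow[:, 1], flow[:, 0]))
print(f"tracked {len(flow)} points, mean speed {speeds.mean():.2f} px/frame,"
      f" mean direction {angles.mean():.1f} deg")
```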

figure 4

a Optical flow as the ACEcam is translated in front of a checkerboard pattern at a distance of 10 mm. Here green dots represent the ommatidia illuminated by bright squares of the checkerboard, and the direction and length of the vector denote the motion direction and velocity of a bright square. b Optical flow as the ACEcam is rotated, with the dark spot indicating the calculated rotation centre. c Experimental setup to generate very high angular velocities for the dynamic response measurement. Five LEDs are evenly spaced along 180 o and lit up successively with a delay time ∆ t whose minimum value is equal to the response time of the photodiode Δ t dec , and five photodiodes are employed to record the light emitted by the corresponding LEDs. d , e Response signals of the photodiodes (upper panel) when the LEDs are driven by square waves (lower panel) of f flicker  = 240 Hz in ( d ) and 31.3 kHz in ( e ). f Signal transmission pathway in the natural ommatidium. g Signal transmission pathway in the artificial ommatidium

As the fifth merit, ACEcam shows ultrafast angular motion perception capability. To emulate a simple object undergoing fast angular motion, 5 LEDs are equally spaced over 180° and sequentially activated by square waves with a period of Δt (Fig. 4c). When a CMOS chip (OV7725, OmniVision Technologies Inc., 30 fps) is used as the photodetection unit, the frame rate is 30 Hz, and the angular perception is limited to 5.4 × 10³ deg/s (Materials and methods: Using a CMOS chip as the photodetection unit and Supplementary Video 2). Similarly, three ‘T’ objects are equally spaced over 180° and sequentially activated by square waves with a period of Δt (Supplementary Fig. S5) to further demonstrate the angular motion perception capability (Supplementary Video 3). Due to the frame rate limitations of the chip, a high flicker frequency f flicker may lead to some objects being missed in the recording. To further investigate ACEcam’s angular motion perception capability, a photodiode array with 5 electromagnetically shielded photodiodes (ElecFans) is used (Fig. 4c), and the light produced by the flickering LEDs is recorded with these photodiodes. When f flicker = 24 Hz, close to the human flicker fusion frequency (FFF) 32,33, the photodiodes record smooth response curves (Supplementary Fig. S6). When f flicker = 240 Hz, close to the FFF of the fly Glossina morsitans (FFF ≈ 205 Hz 32,34), the photodiode signals remain smooth (Fig. 4d). To test the limit, the LEDs are set at f flicker = 31.3 kHz, which matches the response time of the photodiodes used in this experiment (i.e., 31.9 μs) and is ~100 times higher than the typical FFF of NCEs; at this frequency, the detected electrical signals of the photodiodes change from a square wave to a spike wave (Fig. 4e). Equivalently, the ACEcam can respond to an angular velocity of up to 5.6 × 10⁶ deg/s (Materials and methods: Using a photodiode array as the photodetection unit and Supplementary Video 4), which could be further improved by several orders of magnitude using faster photodiode arrays (e.g., 28 Gbit/s, PD20V4, Albis). This property broadens the application range to high-speed objects, such as aeroplanes and even spacecraft, a capability that is impossible for common ACEs.

The compelling reason behind ACEcam’s remarkable ultrafast angular motion perception lies in its emulation and surpassing of NCEs’ signal transmission. In contrast to spiking neurons, which exhibit an “all-or-none” behaviour due to their refractory period, nonspiking graded neurons in insects have multilevel responses and temporal summation characteristics when stimulated sequentially (Supplementary Fig. S7 ) 32 , 35 . This feature allows for a significant increase in the signal transmission rate between the retina and lamina neurons from approximately 300 bit/s (spiking neurons) to 1650 bit/s (nonspiking graded neurons) 36 . This feature enhances the performance of NCE visual systems. In our ACEcam, we have simplified the signal conversion process from that of natural ommatidium, which involves multiple steps (e.g., photosignals to histamine biological signals, then to electric signals; Fig. 4f ); specifically, in the artificial ommatidium, only one signal transduction step is needed (i.e., photosignals to electric signals; Fig. 4g ). Thus, the theoretical limit of the signal transmission rate is determined only by the frequency response of individual photodetection units (e.g., photodiodes), which could reach up to 50 Gbit/s, 7 orders of magnitude higher than that in the natural ommatidium. The distinctive anatomical structure empowers the ACEcam with significant potential for ultrafast angular motion perception, surpassing not only the existing ACEs but also outperforming the NCEs, and thus this characteristic serves as a blueprint for advancing ACE development.

In the proposed ACEcam, lensed plastic optical fibres are used as artificial ommatidia. By adding a conical microlens to the distal end of the fibre, the plastic optical fibre mimics the function of an ommatidium, collecting and transmitting light to the sensing unit. A bundle of lensed plastic optical fibres evenly distributed on a hemispherical surface is assembled to mimic NCEs, and the proposed ACEcam demonstrates excellent static imaging and dynamic motion detection capabilities. For example, a wide field of view (i.e., 180°) enables the ACEcam to outperform the majority of ACEs, making it particularly well-suited for applications in areas such as surveillance; the real-time panoramic direct imaging without distortions eliminates the need for redundant post-processing methods, rendering the ACEcam more suitable for applications such as imaging and distance measurement among moving objects in real-world scenarios; a nearly infinite depth of field can enhance the sense of realism in augmented reality experiences, giving it an advantage in virtual and augmented reality over designs lacking this property; and translational and rotational motion perception capabilities combined with ultrafast angular motion detection (5.6 × 10⁶ deg/s at maximum) provide the ACEcam with the potential for kinestate tracking and motion state control across various machines, from common cars to high-speed aeroplanes and even spacecraft. The amalgamation of these merits also positions the ACEcam for niche applications. For instance, the 180° field of view and ultrafast angular motion detection make ACEcam suitable for integration into obstacle avoidance systems for high-speed unmanned aerial vehicles. This capability reduces the need for multiple obstacle avoidance lenses, consequently saving weight and size. The 180° field of view and small size of the ACEcam also make it suitable for endoscopy. Although the image resolution and size are limited by the number of artificial ommatidia, this ACEcam provides an overview of the imaging space, which is useful for complementing existing camera systems that observe regions of interest with high resolution to obtain fine details.

In future research, we plan to integrate apertures at the distal end of the optical fibres. By considering the relationship among the diameter and thickness of these apertures and their distance from the optical fibre, a further reduction in the acceptance angle can be anticipated. In addition, there are two approaches to improving the resolution and miniaturisation of the ACE camera. (1) Narrower optical fibres: Plastic optical fibres with smaller diameters (e.g., from the current 250 μm down to 25 μm) can be used to reduce the space occupied by each fibre and thus increase the number of plastic optical fibres. Additionally, advanced fabrication technologies and devices with better critical dimension capabilities (e.g., nanoArch S130, Boston Micro Fabrication Nano Material Technology) can be employed to create domes and bunchers with smaller dimensions and more through-holes with a reduced diameter. This would allow for a greater number of narrow plastic optical fibres, thereby enhancing the image resolution. Moreover, the reduced diameters of key components would contribute to further miniaturisation of the ACEcam. (2) Optical fibre bundles: Plastic optical fibres can be replaced with imaging optical fibre bundles to mimic optical and neural superposition in NCEs. Since each imaging optical fibre bundle contains thousands of individual fibres, the resolution can be significantly increased if the relationship between the microlens and the imaging optical fibre bundles is well analysed, similar to the analysis presented in this article. More compact optical fibres in bundles will also aid in further miniaturising the ACEcam. Moreover, the combination of optofluidic lenses and the ACEcam offers the potential to harness the benefits of both arthropods’ compound eyes and vertebrate monocular eyes.

Materials and methods

Fabrication of artificial compound eyes for a full-vision camera.

The 3D-printed components are illustrated in Fig. 5 . First, a dome (Fig. 5a, b , external radius R  = 7.0 mm, open angle 180 o ) and a buncher (Fig. 5c ) were 3D-printed by projection micro stereolithography (microArch® S140, BMF Precision Tech Inc.). To allow for positioning each of the fibres, 271 through-holes with a diameter of 280 μm were evenly distributed in the dome (the number of through-holes from the centre to the outermost ring increases evenly, with values of 1, 6, 12, 18, 24, 30, 36, 42, 48, and 54, ensuring a uniform distribution) and another 271 through-holes were evenly distributed in the buncher. Then, 271 conical microlens optical fibres (external diameter d  = 250 μm) were manually threaded through the holes in both the dome and the buncher while maintaining the relative positions of the optical fibres. The microlens ends of the optical fibres were placed on the curved surface of the dome, and the other ends of the optical fibres were cut to the same length after being passed through the buncher so that the fibre ends formed a flat surface. Next, the dome and the buncher were placed in a screwed hollow tube (Fig. 5d ). This hollow tube was connected to another tube containing an imaging lens (standard M12 camera lens) and a flat imaging sensor chip (OV7725, OmniVision Technologies Inc.) (Figs. 1 f, 2 c). With this setup, the light rays received by the microlenses on the curved surface (i.e., the surface of the dome) could be transmitted to the planar surface (i.e., the surface of the buncher) and projected through the imaging lens to the flat surface of the imaging sensor chip.
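A quick sanity check of the quoted hole layout (a hedged illustration that only restates the ring counts listed above) confirms that the per-ring counts sum to 271:

```python
# Ring-by-ring hole counts quoted in the text: 1 hole at the pole, then rings of
# 6, 12, ..., 54 holes (an arithmetic progression with step 6 over nine rings).
ring_counts = [1] + [6 * k for k in range(1, 10)]

print("holes per ring:", ring_counts)
print("total through-holes:", sum(ring_counts))   # expected: 271
assert sum(ring_counts) == 271
```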

figure 5

a Photograph of the perforated dome, held by tweezers. b Design of the perforated dome. c Photograph of the perforated buncher. d Design of the screwed hollow tube that will be used to hold the dome and the buncher

Acceptance angle of the bare plastic optical fibre

A plastic optical fibre usually has a core (polymethyl methacrylate, PMMA) and a cladding (polytetrafluoroethylene, PTFE). Based on the principle of optical path reversibility, the acceptance angle of the optical fibre is equal to the divergence angle of the light exiting the fibre end. Therefore, we can study the divergence angles of optical fibres in different scenarios (Fig. 6).

figure 6

Red paths represent the light reflected from the upper core/cladding interface of the optical fibre, and green paths represent the light reflected from the lower core/cladding interface of the optical fibre. a , b In the bare multimode optical fibre, the light can be reflected from the upper ( a ) or lower ( b ) core/cladding interface. c , d In the optical fibre capped with a spherical microlens, the light can be reflected from the upper ( c ) or lower ( d ) core/cladding interface. e – h In the optical fibre capped with a conical microlens, the light has various paths. In one case, the light experiences no reflection in the conical surface after being reflected from the upper ( e ) or lower ( f ) core/cladding interface. In the other case, the light experiences one hop (i.e., reflection) in the conical surface after being reflected from the upper ( g ) or lower ( h ) core/cladding interface

In the simplest case of a bare multimode optical fibre, the light is reflected from the interface of the core and the cladding (Fig. 6). Let n₀, n₁, and n₂ represent the refractive indices of the air, core and cladding, respectively, and let r denote the core radius.

Specifically, when ∠1 reaches its minimum, i.e., the critical angle for total internal reflection at the core/cladding interface (Fig. 6a, b), it follows that sin ∠1 = n₂/n₁.

Similarly, when ∠2 reaches its maximum, it follows that ∠2 = 90° − ∠1 (with ∠1 at its minimum).

Based on the law of refraction, at this time the divergence angle ∠3 reaches its maximum absolute value, which satisfies n₀ sin ∠3 = n₁ sin ∠2 = √(n₁² − n₂²).

Here, n₀ sin ∠3 is also called the numerical aperture (NA). Typically, a plastic optical fibre in air has n₀ = 1 and √(n₁² − n₂²) = 0.5, and thus ∠3 = 30°.

Similarly, when light is reflected from the opposite side of the interface between the core and cladding (Fig. 6b), the maximum absolute value of ∠3 is 30°. Therefore, the acceptance angle φ flat of the flat end of the plastic optical fibre is φ flat = 60°.
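For concreteness, this standard numerical-aperture relation can be evaluated with the fibre indices listed later in the ray-tracing section (core 1.4936 and cladding 1.4074 at 550 nm); the short sketch below is illustrative only:

```python
import math

# Acceptance angle of a bare step-index fibre from its numerical aperture:
# NA = n0 * sin(half-angle) = sqrt(n1**2 - n2**2)
n0 = 1.0      # surrounding medium (air)
n1 = 1.4936   # PMMA core refractive index at 550 nm (value quoted in the paper)
n2 = 1.4074   # cladding refractive index at 550 nm (value quoted in the paper)

na = math.sqrt(n1**2 - n2**2)
half_angle = math.degrees(math.asin(na / n0))

print(f"NA = {na:.3f}")                                   # ~0.5
print(f"half acceptance angle = {half_angle:.1f} deg")    # ~30 deg
print(f"full acceptance angle = {2 * half_angle:.1f} deg")  # ~60 deg
```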

Acceptance angle of the plastic optical fibre with a spherical microlens

Here, we analyse the acceptance angle of the plastic optical fibre with a spherical or conical microlens (Materials and methods: Acceptance angle of the plastic optical fibre capped with a conical microlens) based on light paths. When the light is reflected from the upper core/cladding interface (Fig. 6c ), it is first refracted at the fibre/microlens interface and then at the microlens/air interface. Let ρ , R and n l represent the radial position at the microlens surface, the radius of the microlens surface and the refractive index of the microlens, respectively. Then, ∠ 3 and ∠ 5 can be calculated as follows:

Since ∠ 5 and ∠ 4 are complementary, we have

Based on the law of refraction, we have

Finally, we have that

In addition, the angle of the tangent line (Supplementary Fig. S8 ) at the curved surface is defined as follows:

Thus, the upper limit angle α upper should be the smaller value between ∠ 8 and the angle of the tangent line. The analytic results are plotted in Supplementary Fig. S9a for microlenses with different radii.

On the other hand, when light is reflected from the lower core/cladding interface (Fig. 6d ), ∠ 3, ∠ 5 and ∠ 7 can be calculated with Eqs. ( 4a ), ( 4b ) and ( 8 ), respectively. However, ∠ 4 becomes

Since ∠ 6 = ∠ 7 + ∠ 8, ∠ 8 can be expressed as

This gives the lower limit angle α lower  =  ∠ 8. The analytic results are plotted in Supplementary Fig. S9b for microlenses with different radii.

Based on the analysis of the optical path, if α lower ≤ 0, the half acceptance angle φ sm should follow φ sm /2 = α upper ; here, the subscript sm represents the spherical microlens. In contrast, if α lower  > 0, the half acceptance angle should follow φ sm /2 = α upper - α lower . The analytic results are plotted in Supplementary Fig. S9c for microlenses with different radii, assuming that the material of the microlenses is NOA81 and the refractive index n l is 1.56 (NOA81, a widely used UV-curable optical adhesive, meets both economic and optical practicality requirements, making it highly suitable as the material for microlenses). In this case, a hollow area is observed in the centre of the acceptance area. A detailed explanation of the cone will be presented below.

According to the theoretical analysis results (Supplementary Fig. S9c ), we find that:

The acceptance angles φ sm at different radial positions ρ vary significantly with the microlens radius R , increasing the difficulty of design and analysis.

In many cases, the acceptance angle has φ sm  > 60 o , which is similar to the acceptance angle of 60 o of a bare plastic optical fibre. Thus, the use of a spherical microlens does not reduce the acceptance angle.

Therefore, the spherical microlens is not a good choice if the acceptance angle of the optical fibre needs to be reduced. Instead, we use a conical microlens to reduce the acceptance angle (Fig. 7 ).

figure 7

a The spherical microlens has a larger acceptance angle than the flat-end optical fibre (i.e., φ 3  >  φ 1 ). b The conical microlens has a smaller acceptance angle (i.e., φ 2  <  φ 1 ). Therefore, the use of conical microlenses can effectively narrow the acceptance angle

Acceptance angle of the plastic optical fibre capped with a conical microlens

The acceptance angle (or equivalently, the divergence angle) is strongly affected by whether the light experiences total internal reflection at the conical surface. Thus, we consider two cases, no reflection (Fig. 6e, f ) and one hop (Fig. 6g, h ). Here “hop” means “reflection” at the conical surface (not at the core/cladding interface). These two cases are discussed separately below.

No reflection at the conical surface of the microlens

When light is reflected from the upper core/cladding interface (Fig. 6e ), ∠ 3 follows Eq. ( 4a ). In addition, based on the geometrical relationship, ∠ 5 and ∠ 6 follow

where θ is the half-apex angle of the cone.

Then, based on the law of refraction, we have

Since ∠ 9 = ∠ 7 + ∠ 8, ∠ 8 is defined as

Similarly, the upper limit angle α upper should be the smaller value of ∠ 8 and the angle of the tangent line, which is equal to θ . In φ cm , the subscript cm denotes the conical microlens.

When light is reflected from the lower core/cladding interface (Fig. 6f ), ∠ 3 still follows Eq. ( 4a ). In addition, based on the geometrical relationship, ∠ 4 follows

Therefore, ∠ 5 follows

Based on the geometrical relationship, ∠ 7 follows

Then, according to the law of refraction, ∠ 9 follows

Similarly, the lower limit angle α lower should be the smaller value of ∠ 8 and the angle of the tangent line, which is equal to θ . In addition, when α lower ≤ 0, the half acceptance angle φ cm (or equivalently, the half divergence angle) follows φ cm /2 = α upper (Supplementary Fig. S10a ), and when α lower  > 0, φ cm follows φ cm /2 = α upper – α lower (Supplementary Fig. S10b ). Thus, we find that the acceptance angle of the cone is independent of the radial position ρ of the point at which the light hits the conical surface and is only dependent on the half-apex angle θ . This feature is very different from that of the spherical microlens and also makes it easy for analysis.

One hop at the conical surface of the microlens

When the light is reflected from the upper core/cladding interface of the optical fibre, ∠ 3 still follows Eq. ( 4a ) (Fig. 6g ). Based on the geometrical relationship, ∠ 5 and ∠ 6 can be expressed as

Since the sum of the angles in the quadrilateral is 2 π , we can formulate the following relationship:

Next, because

we have that

In addition,

Moreover, the angle of the tangent line is – θ . Therefore, when ∠ 12 ≥ 0, the upper limit angle should be ∠ 12, and when ∠ 12 < 0, the upper limit angle should be the smaller one of the absolute values of ∠ 12 and the tangent angle (i.e., the minimum of abs( ∠ 12) versus θ , here abs means taking the absolute value).

When the light is reflected from the lower core/cladding interface of the optical fibre (Fig. 6h ), ∠ 3 and ∠ 6 follow Eqs. ( 4a ) and ( 23 ). Then, based on the geometrical relationship, ∠ 5 is defined as

Based on the sum of the angles in the quadrilateral, we have

Then, we have

In addition, we have

Therefore, we find that

Therefore, when ∠ 12 ≥ 0, the lower limit angle should be ∠ 12, and when ∠ 12 < 0, the lower limit angle should be the smaller of the absolute values of ∠ 12 and the angle of the tangent line (i.e., the minimum of abs( ∠ 12) versus θ ).

Analysis of the acceptance angle

Based on theoretical optical path analysis, when θ > 31°, the light rays emitted by the optical fibre that first impact the conical surface are directly refracted (corresponding to the no-reflection case discussed above; Fig. 6e, f), and the one-hop case can be ignored. In contrast, when θ < 31°, the rays that first impact the conical surface undergo total internal reflection and are reflected once before exiting (corresponding to the one-hop case discussed above; Fig. 6g, h and the red line in Fig. 8). Nevertheless, the one-hop case has low output energy. Therefore, we ignore the one-hop case in the following discussions.

figure 8

The theoretical analysis is presented using three distinct colour lines to illustrate different scenarios. (1) When θ  ≥ 43 o , the green line represents the case in which light is directly emitted from the conical surface of the microlens without any reflection, and there is no hollow region within the emission pattern. However, the acceptance angle is too large ( > 60 o ). (2) When 31 o  ≤  θ < 43 o , the cyan line represents the case in which light is directly emitted from the conical surface of the microlens without reflection. The acceptance angle is narrowed when θ goes smaller, but a hollow central region appears in the emission pattern. Equivalently, if the fibre collects light, the information in the central hollow region cannot be detected, which is unfavourable. The star highlights the working conditions used in our experiments, i.e., θ  = 35 o and an acceptance angle of 45 o . By rounding the sharp tip of the cone, the hollow central region can be eliminated from the emission pattern (the inset in the lower right part). (3) When θ  < 31 o , the red line represents the case in which the light undergoes a single reflection (or hop) in the conical microlens. The hollow central region reappears and the transmitted light intensity is very low. Therefore, this case is not suitable for collecting the light

Supplementary Fig. S10a and Fig. S 10b depict two scenarios of light transmission from the cone. When α lower ≤ 0 ( θ ≥ 43 o ), the critical light from the lower core/cladding interface travels upwards, and the light projected onto the receiving surface forms a circle on the observation screen. Since the microlens is axisymmetric, the actual acceptance area (or equivalently, the divergence area) has a circle shape with no hollow region (Supplementary Fig. S10a ). Consequently, the acceptance angle of the cone is determined by the larger absolute value between α upper and α lower . Notably, the absolute value of α upper is consistently larger than α lower based on the theoretical analysis. Therefore, the half acceptance angle φ cm /2 of the conical microlens is ultimately determined by α upper (i.e., φ cm /2 = α upper ) when α lower ≤ 0 (Supplementary Fig. S10a , the green line segments in Fig. 8 ).

However, when α lower  > 0 (31 o < θ  < 43 o ), the critical light from the lower core/cladding interface travels downwards, and the light projected onto the receiving surface forms a circle on the observation screen (Supplementary Fig. S10b ). The emission pattern on the observation screen is a ring with a hollow region at the centre. In this case, the half acceptance angle of the cone is determined by the difference between both critical angles, that is, φ cm /2 = α upper - α lower (Supplementary Fig. S10b , the cyan line segment in Fig. 8 ). The hollow central region is unfavourable since light information in that angular range is lost.

To eliminate the hollow central region, the tip of the cone can be rounded (Supplementary Fig. S10c ). With the rounded tip, the cone can project light to the central region. Equivalently, when used to receive light, the rounded tip can accept light from the central region.

Optical tracing simulation

An optical tracing simulation is conducted to verify the design of the cone. Under each condition, the light observation screen is placed 15 cm away from the cone, the cone bottom radius is 0.15 mm, the cone height is determined by θ, and the refractive index at 550 nm is 1.56. The cone sits directly on the flat end of an optical fibre with the following parameters: length, 10 mm; core material, PMMA; core diameter, 0.24 mm; core refractive index, 1.4936 at 550 nm; cladding material, PTFE; cladding diameter, 0.25 mm; and cladding refractive index, 1.4074 at 550 nm. These parameters are consistent with the optical fibres employed in this study. The light is introduced into the other end of the optical fibre and emitted from the cone, forming an emission pattern (i.e., a light intensity distribution) on the observation screen (Supplementary Fig. S10). As θ varies, the emission pattern changes considerably.

We consider the acceptance angle and the corresponding simulated optical field patterns for conical microlens optical fibres at different half-apex angles θ (Fig. 8 ). When θ ≤ 31 o , no light is directly emitted from the cone. Hence, θ  = 31 o is the minimum half-apex angle of the conical microlens. As θ decreases ( θ  < 31 o ), the light is reflected one or more times (hops) within the cone before being emitted, and the emission pattern is finally displayed (the red line segments in Fig. 8 ). When 31 o  <  θ  < 43 o , the acceptance angle increases with larger θ , and a hollow region appears in the middle of the emission pattern (the cyan line segment in Fig. 8 ). When 43 o  ≤  θ  ≤ 68 o , the acceptance angle increases further with increasing θ , and the emission pattern is a solid circle (the straight part of the green line segment in Fig. 8 ). When θ  > 68 o , the acceptance angle decreases as θ increases, and the emission pattern is still a solid circle (the curved part of the green line segment in Fig. 8 ). The simulation results are quantitatively consistent with the theoretical results.
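The θ-dependent regimes described above can be summarised in a small helper (a hedged sketch that merely encodes the thresholds quoted in the text, not the ray-tracing model itself):

```python
def emission_regime(theta_deg):
    """Classify the emission pattern of the conical microlens by its half-apex
    angle, using the 31, 43 and 68 degree thresholds quoted in the text."""
    if theta_deg < 31:
        return "one or more internal hops; hollow centre, low transmitted intensity"
    if theta_deg < 43:
        return "direct emission; narrowed acceptance angle but hollow central region"
    if theta_deg <= 68:
        return "direct emission; solid circle, acceptance angle grows with theta"
    return "direct emission; solid circle, acceptance angle shrinks with theta"

for theta in (25, 35, 50, 75):
    print(theta, "deg ->", emission_regime(theta))
```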

Choice of shape and size of microlenses

In the ACEcam, the interommatidial angle ∆Φ is 12.2°, while the acceptance angle of the flat-end plastic optical fibre is 60°, causing severe overlap between the views of adjacent ommatidia (Supplementary Fig. S1a). Based on the above analyses, we choose a conical microlens with a half-apex angle of θ = 35° (marked with a star in Fig. 8). This value balances the acceptance angle against the output energy. Specifically, when θ exceeds 35°, the acceptance angle increases; conversely, if θ falls below 35°, the output energy diminishes to an impractical extent. Correspondingly, the acceptance angle is 45°. At θ = 35°, a hollow centre in the emission pattern cannot be avoided (Fig. 8). To address this issue, the sharp tip of the conical microlens is rounded. With this approach, the rays passing through the rounded part of the conical microlens are refracted towards the central part of the emission pattern, eliminating the hollow centre (Fig. 8). In summary, we choose a conical microlens with a rounded tip, a half-apex angle θ of 35°, and an acceptance angle of 45°, ensuring that the emission pattern has no hollow centre.

Fabrication of a conical microlens on an optical fibre

Here, we present a pioneering method for fabricating a conical microlens on an optical fibre. First, a template with a conical groove array was designed with 3D CAD (Computer Aided Design) software (Fig. 9a ). The size of the conical groove is equal to the abovementioned conical microlens, and four ‘+‘ markers are positioned out of plane at the four corners. Then, this template was fabricated using an ultrahigh precision 3D printing method (microArch® S140, BMF Precision Tech Inc.). Next, polydimethylsiloxane (PDMS) is used to transfer patterns of the template (Fig. 9b ). This PDMS mould has convex cones and positioning grooves (Fig. 9c ).

figure 9

a A 3D-printed template with an array of conical grooves and 4 protruded ‘+‘ alignment markers at the corners. The enlarged view shows that the conical surface of each groove has a layered texture and is not smooth. b Polydimethylsiloxane (PDMS) is used to transfer patterns. c The first PDMS mould. d Physical vapour deposition (PVD) and electroplating. e Cu-coated mould. The inset shows that the layered texture is smoothened and the tip of the cone is rounded. f PDMS is used to transfer patterns again. g The second PDMS mould with conical grooves. h The same volume (~0.15 μL) of NOA81 liquid is deposited into each conical groove. i A 3D-printed optical fibre buncher with many through holes. j UV light is used to cure the conical microlenses on top of the optical fibres

To smooth the rough, layered surface texture of the convex cones left by the 3D printing method and to round their sharp tips, the PDMS mould is electroplated. A Cu layer several nanometres thick was first coated on the PDMS surface using physical vapour deposition (PVD). The coated PDMS was then electroplated with Cu for 7 h at a current density of 1 A/dm² (Fig. 9d). Since Cu deposits faster at positions with higher current density, the edges of the layered texture are bridged, forming a smooth surface, and the sharp tip of each convex cone is rounded (Fig. 9e).

Subsequently, the Cu-coated PDMS mould was transferred to another PDMS mould (Fig. 9f ), and this second PDMS mould has conical grooves and ‘+‘ positioning markers (Fig. 9g ).

The material used for the microlens is NOA81, which is liquid and UV curable. A microsyringe was used to deposit the same volume of NOA81 (0.15 μL/drop) into each conical groove (Fig. 9h ).

To mount each microlens on each optical fibre end in a batch process, an optical fibre buncher was designed and fabricated by the 3D printing (Fig. 9i ). The optical fibre buncher has 4 ‘+‘ positioning trenches and an array of through-holes, each corresponding to the position of a conical groove in the second PDMS mould. The through-holes have a diameter of 0.28 mm, which is slightly larger than the diameter of the plastic optical fibre (0.25 mm) to address potential fabrication errors with the 3D printing method (±0.025 mm).

Thereafter, the optical fibre buncher was mounted on the second PDMS mould by carefully aligning the positioning trenches of the former to the protruded positioning markers of the latter under an optical microscope. As a result, each through-hole in the optical fibre buncher is well aligned to one conical groove in the second PDMS mould. Then, the optical fibres were manually threaded into the through-holes to contact the NOA81 microlenses (Fig. 9j ), followed by UV illumination to cure the NOA81. With this approach, each conical microlens was firmly fixed on the end of the corresponding optical fibre.

Oxygen inhibits the free-radical polymerisation of liquid NOA81, and the gas permeability of PDMS ensures that an ultrathin surface layer of NOA81 remains uncured near each PDMS surface, even though the bulk of the NOA81 has already hardened 37,38. This uncured layer facilitates the easy detachment of the NOA81 microlenses from the second PDMS mould. Finally, many conical microlens optical fibres (~200 pieces) are obtained with this batch process. Conveniently, the moulds and the fibre buncher can be reused for further fabrication runs of conical microlens optical fibres.

In the future, we aim to simplify this manual process through automation. There are two possible approaches to achieve this automation:

AI-assisted robots: Artificial intelligence (AI) can be utilised to identify the through-holes in both the dome and the buncher. Subsequently, AI can control a robotic arm to insert the plastic microlensed fibres into these holes. The integration of AI with industrial processes has become increasingly popular in recent years due to the rapid advancements in AI technology, and it has the potential to significantly enhance automated fabrication.

Liquid waveguides: Traditional plastic optical fibres could be replaced with new liquid optical guides. Although a recent study attempted to use liquid optical guides by filling silicone elastomer into hollow pipelines within a 3D-printed black substrate 16 , the waveguiding effect was not well achieved due to the layered texture and black colour of the 3D-printed pipeline inner surfaces. Moreover, that recent study did not address the optical design criteria essential for optical waveguides in ACEs. In the future, we can develop microlensed liquid optical guides, consisting of a microlens, liquid optical guide core, and cladding, based on the design criteria discussed in this article. Further, these microlensed liquid optical fibres may be incorporated into the ACEcam components through spin coating.

Fabrication of the PDMS mould

A transparent elastomeric PDMS material and a curing agent are mixed with a mass ratio of 10:1.

The mixture is stirred thoroughly.

The mixture is centrifuged for 2 min at a speed of 1400 rpm to remove bubbles.

The mixture is poured on the surface of the conical groove template prefabricated by the ultrahigh precision 3D printing method.

The PDMS layer on the template is placed into a vacuum pump under a vacuum environment for 3 h.

The PDMS layer on the template is annealed at 85 °C for 45 min.

The cured PDMS layer is peeled off from the template, forming the PDMS mould.

Fabrication of liquid microlenses

A microsyringe (volume 0.15 μL/drop) is used to hold the NOA81 liquid.

The microsyringe is used to inject the same volume of NOA81 into each conical groove.

Mounting the microlenses on the optical fibre

The ‘+‘ positioning trenches in the optical fibre buncher are precisely aligned with the protruded ‘+‘ positioning markers in the second PDMS mould under an optical microscope.

The optical fibres are individually inserted into the through-holes in the optical fibre buncher until they contact the conical grooves filled with liquid NOA81.

The liquid NOA81 is cured by UV illumination for 2 min.

The optical fibre buncher is carefully removed from the other end of the optical fibres.

The conical microlens optical fibres are removed from the second PDMS mould and further UV cured.

Alignment error analysis

The alignment markers on the PDMS mould and positioning trenches in the optical fibre buncher are designed to be the same size to ensure precise alignment. During the alignment process, the elasticity of the PDMS allows the alignment markers on the PDMS to fit into the positioning trenches in the optical fibre buncher, despite being the same size. This alignment process mimics the hard contact alignment used in common multiple photolithography, ensuring high positioning accuracy across different layers. Therefore, the alignment accuracy is correspondingly high.

The alignment error primarily arises from the fabrication error of these markers, which is less than 0.025 mm (microArch® S140, BMF Precision Tech Inc.). This fabrication error can be further reduced by using devices with a higher precision.

Based on our optical path analysis of this alignment error, when the optical axes of the optical fibre and the microlens are slightly offset (~10 µm), the relationships between the upper limit angle and the half-apex angle (Supplementary Fig. S11a) and between the lower limit angle and the half-apex angle (Supplementary Fig. S11b) still comply with the principles described in Materials and methods: Acceptance angle of the plastic optical fibre capped with a conical microlens. Thus, this slight error can be considered negligible. Only when the deviation becomes too large (Supplementary Fig. S11c) does the acceptance angle of the optical fibre deviate significantly, which would degrade the detection performance. Therefore, utilising four markers to ensure hard-contact alignment is crucial for maintaining optimal performance.

Calculation of maximum angular velocities

Using a CMOS chip as the photodetection unit

In our testing setup (Fig. 4c), the moving object is mimicked by 5 equally spaced and sequentially driven LEDs. The delay time Δt1 for an object to move from one side of the dome to the opposite side is determined by the FOV and the angular speed ω as

Δt1 = FOV/ω

Because the moving object is mimicked by 5 sequentially driven LEDs, Δt1 should be greater than or equal to the response time Δt_dec of the photodetector,

Δt1 ≥ Δt_dec

Combining both relations gives

FOV/ω ≥ Δt_dec

and, thus, the maximum angular velocity is given by

ω_max = FOV/Δt_dec

The response time of a CMOS chip with a frame rate of 30 Hz is 33.3 ms. Correspondingly, with the dome's 180° FOV, the highest angular perception speed is

ω_max = 180°/33.3 ms ≈ 5.4 × 10^3 degrees per second

Using a photodiode array as the photodetection unit

When a photodiode array with 5 electromagnetically shielded photodiodes (ElecFans) is used for photodetection, the response time is 31.9 μs (equivalently, 31.3 kHz). In this case, the highest angular perception speed is

ω_max = 180°/31.9 μs ≈ 5.6 × 10^6 degrees per second
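
As a minimal numeric check of these two limits (assuming, as in the reconstruction above, ω_max = FOV/Δt_dec with the dome's 180° field of view and the response times quoted in the text):

```python
# Minimal check of the maximum perceivable angular velocity,
# assuming omega_max = FOV / delta_t_dec with a 180-degree dome FOV.
FOV_DEG = 180.0  # field of view of the hemispherical dome (degrees)

response_times = {
    "CMOS chip (30 Hz frame rate)": 1.0 / 30.0,  # ~33.3 ms per frame
    "photodiode array (31.3 kHz)": 31.9e-6,      # 31.9 microseconds
}

for detector, dt_dec in response_times.items():
    omega_max = FOV_DEG / dt_dec  # degrees per second
    print(f"{detector}: omega_max ≈ {omega_max:.3g} deg/s")

# Prints ≈ 5.4e+03 deg/s for the CMOS chip and ≈ 5.64e+06 deg/s for the photodiode array.
```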

Data availability

The data that support the plots within this paper and other findings of this study are available from the corresponding authors upon reasonable request.

Code availability

The codes that support the findings of this study are available from the corresponding authors upon reasonable request.

References

1. Hooke, R. Micrographia, or, Some Physiological Descriptions of Minute Bodies Made by Magnifying Glasses: with Observations and Inquiries Thereupon (Jo. Martyn and Ja. Allestry, 1665).

2. Darwin, C. R. On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life (John Murray, London, 1859).

3. Exner, S. Die Physiologie der Facettirten Augen von Krebsen und Insecten: Eine Studie (Franz Deuticke, Leipzig, 1891).

4. Agi, E. et al. The evolution and development of neural superposition. J. Neurogenet. 28, 216–232 (2014).

5. Shinomiya, K. et al. The organization of the second optic chiasm of the Drosophila optic lobe. Front. Neural Circuits 13, 65 (2019).

6. Duparré, J. et al. Thin compound-eye camera. Appl. Opt. 44, 2949–2956 (2005).

7. Dudley, R. The Biomechanics of Insect Flight: Form, Function, Evolution (Princeton University Press, 2000).

8. Wei, K., Zeng, H. S. & Zhao, Y. Insect–Human Hybrid Eye (IHHE): an adaptive optofluidic lens combining the structural characteristics of insect and human eyes. Lab Chip 14, 3594–3602 (2014).

9. Brückner, A. et al. Thin wafer-level camera lenses inspired by insect compound eyes. Opt. Express 18, 24379–24394 (2010).

10. Jeong, K. H., Kim, J. & Lee, L. P. Biologically inspired artificial compound eyes. Science 312, 557–561 (2006).

11. Floreano, D. et al. Miniature curved artificial compound eyes. Proc. Natl Acad. Sci. USA 110, 9267–9272 (2013).

12. Song, Y. M. et al. Digital cameras with designs inspired by the arthropod eye. Nature 497, 95–99 (2013).

13. Lee, M. et al. An amphibious artificial vision system with a panoramic visual field. Nat. Electron. 5, 452–459 (2022).

14. Wu, D. et al. Bioinspired fabrication of high-quality 3D artificial compound eyes by voxel-modulation femtosecond laser writing for distortion-free wide-field-of-view imaging. Adv. Opt. Mater. 2, 751–758 (2014).

15. Kogos, L. C. et al. Plasmonic ommatidia for lensless compound-eye vision. Nat. Commun. 11, 1637 (2020).

16. Dai, B. et al. Biomimetic apposition compound eye fabricated using microfluidic-assisted 3D printing. Nat. Commun. 12, 6458 (2021).

17. Phan, H. L. et al. Artificial compound eye systems and their application: a review. Micromachines 12, 847 (2021).

18. Deng, Z. F. et al. Dragonfly-eye-inspired artificial compound eyes with sophisticated imaging. Adv. Funct. Mater. 26, 1995–2001 (2016).

19. Ma, M. C. et al. Super-resolution and super-robust single-pixel superposition compound eye. Opt. Lasers Eng. 146, 106699 (2021).

20. Ma, M. C. et al. Target orientation detection based on a neural network with a bionic bee-like compound eye. Opt. Express 28, 10794–10805 (2020).

21. Koike, Y. & Asai, M. The future of plastic optical fiber. NPG Asia Mater. 1, 22–28 (2009).

22. Säckinger, E. Broadband Circuits for Optical Fiber Communication (John Wiley & Sons, Hoboken, 2005).

23. Lee, B. Review of the present status of optical fiber sensors. Opt. Fiber Technol. 9, 57–79 (2003).

24. Liu, F. et al. Artificial compound eye-tipped optical fiber for wide field illumination. Opt. Lett. 44, 5961–5964 (2019).

25. Flusberg, B. A. et al. Fiber-optic fluorescence imaging. Nat. Methods 2, 941–950 (2005).

26. Chapman, J. A. Ommatidia numbers and eyes in scolytid beetles. Ann. Entomol. Soc. Am. 65, 550–553 (1972).

27. Land, M. F. in Facets of Vision (eds Stavenga, D. G. & Hardie, R. C.) 90–111 (Springer, Berlin, Heidelberg, 1989).

28. Subbarao, M. & Gurumoorthy, N. Depth recovery from blurred edges. In Proc. CVPR'88: The Computer Society Conference on Computer Vision and Pattern Recognition 498–503 (IEEE, 1988).

29. Subbarao, M. & Surya, G. Depth from defocus: a spatial domain approach. Int. J. Comput. Vision 13, 271–294 (1994).

30. Lucas, B. D. & Kanade, T. An iterative image registration technique with an application to stereo vision. In Proc. 7th International Joint Conference on Artificial Intelligence (ed. Hayes, P. J.) 674–679 (Morgan Kaufmann Publishers Inc., 1981). https://researchr.org/publication/ijcai%3A1981

31. Fleet, D. J. & Langley, K. Recursive filters for optical flow. IEEE Trans. Pattern Anal. Mach. Intell. 17, 61–67 (1995).

32. Chen, J. W. et al. Optoelectronic graded neurons for bioinspired in-sensor motion perception. Nat. Nanotechnol. 18, 882–888 (2023).

33. Kelly, D. H. & Wilson, H. R. Human flicker sensitivity: two stages of retinal diffusion. Science 202, 896–899 (1978).

34. Miall, R. C. The flicker fusion frequencies of six laboratory insects, and the response of the compound eye to mains fluorescent ‘ripple’. Physiol. Entomol. 3, 99–106 (1978).

35. Juusola, M. et al. Information processing by graded-potential transmission through tonically active synapses. Trends Neurosci. 19, 292–297 (1996).

36. de Ruyter van Steveninck, R. R. & Laughlin, S. B. The rate of information transfer at graded-potential synapses. Nature 379, 642–645 (1996).

37. Lei, L. et al. Optofluidic planar reactors for photocatalytic water treatment using solar energy. Biomicrofluidics 4, 043004 (2010).

38. Bartolo, D. et al. Microfluidic stickers. Lab Chip 8, 274–279 (2008).

Acknowledgements

This work was supported by the Research Grants Council (RGC) of Hong Kong (15215620, N_PolyU511/20), Innovation and Technology Commission (ITC) of Hong Kong (ITF-MHKJFS MHP/085/22), The Hong Kong Polytechnic University (1-CD4V, 1-YY5V, 1-CD6U, G-SB6C, 1-CD8U, 1-BBEN, 1-W28S and 1-CD9Q), and National Natural Science Foundation of China (62061160488, 52275529). For technical assistance and facility support, special thanks go to UMF-Materials Research Centre (MRC) and UMF-Cleanroom (UMF-Cleanroom) of the University Research Facility in Material Characterization and Device Fabrication (UMF), University Research Facility in 3D Printing (U3DP), and Surface Engineering Unit of the Additive Manufacturing Stream, Industrial Centre (IC) of The Hong Kong Polytechnic University.

Author information

Authors and Affiliations

Department of Applied Physics, The Hong Kong Polytechnic University, 999077, Hong Kong, China

Heng Jiang, Chi Chung Tsoi, Mingjie Li & Xuming Zhang

Photonics Research Institute (PRI), The Hong Kong Polytechnic University, 999077, Hong Kong, China

Heng Jiang, Chi Chung Tsoi & Xuming Zhang

Key Laboratory of Spectral Imaging Technology, Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, 710119, Xi’an, China

Anhui Province Key Laboratory of Measuring Theory and Precision Instrument, School of Instrument Science and Opto-Electronics Engineering, Hefei University of Technology, 230009, Hefei, China

Mengchao Ma

Department of Mechanical Engineering, The Hong Kong Polytechnic University, 999077, Hong Kong, China

Zuankai Wang

Research Institute for Advanced Manufacturing (RIAM), The Hong Kong Polytechnic University, 999077, Hong Kong, China

Xuming Zhang

Contributions

H.J. and X.Z. conceived the idea and designed the devices; H.J. and C.C.T. performed the experiments; W.X.Y., M.C.M. and M.J.L. helped with the theory and simulation; Z.K.W. assisted with manuscript preparation; H.J. and X.Z. completed the analysis and wrote the paper.

Corresponding author

Correspondence to Xuming Zhang.

Ethics declarations

Conflict of interest

The authors declare no competing interests.

Supplementary information

Supplementary Video 1

Supplementary Video 2

Supplementary Video 3

Supplementary Video 4

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article

Jiang, H., Tsoi, C.C., Yu, W. et al. Optical fibre based artificial compound eyes for direct static imaging and ultrafast motion detection. Light Sci. Appl. 13, 256 (2024). https://doi.org/10.1038/s41377-024-01580-5

Received: 27 February 2024

Revised: 28 July 2024

Accepted: 10 August 2024

Published: 18 September 2024

DOI: https://doi.org/10.1038/s41377-024-01580-5
