Doubling the Frame Rate of a Pair of Chronos Cameras

Written by Roberto Gonzalez, PhD

This application document describes how to set up two Chronos high-speed cameras to double the effective frame rate. Two different methods of accomplishing this are discussed. To trigger the cameras from an external source, please see page 12 of the Triggering & Sync Guide here.

Doubling the frame rate of a camera can be useful for obtaining more detail from a particular event while the viewing area remains the same. In this article, a short test is performed to show how this can be achieved. The synchronization of the cameras' trigger timing is then validated with an oscilloscope.


1. Double the frame rate using two independent cameras


A. Test setup

Two Chronos 1.4 high-speed cameras, one monochrome and one color, are positioned as displayed in Figure 1.

A droplet is dispensed by a syringe located 0.85 m from the surface of a steel plate that the droplet impacts. The diameter of the droplets is D = 4.4 mm. The velocity of the droplet approaching the plate is 3.72 m/s.

The cameras record the droplet as it approaches the plate and spreads onto it. A trigger switch is used to trigger a function generator (FG), a Rigol DG812 10 MHz. The latter sends signals from its channels 1 and 2 to cameras 1 and 2, respectively.

Camera 1 is a Chronos 1.4 monochrome camera and camera 2 is a Chronos 1.4 color camera. Both cameras have a 50 mm Rokinon lens attached, f#1.7. The resolution of camera 1 is 40.6 μm/pixel while that of camera 2 is 39.9 μm/pixel. The cameras are set to record a maximum of 5,000 frames. When the function generator's cycles end, the recording stops and the captured footage can be reviewed and saved.

Figure 1. Schematic representation of the setup used to double the recording frame rate


B. Time delay validation with an oscilloscope

The signals from the function generator and both cameras were recorded by an oscilloscope, RIGOL MSO5074, to verify the time delay between them.

Figure 2 presents the recorded signals. Traces 1 and 2 correspond to the function generator's channels 1 and 2, respectively. Channel 3 corresponds to the output signal from camera 1 and channel 4 is the output signal from camera 2. In the image, the time delay of 500 μs between the function generator's channels 1 and 2 is annotated.

Figure 2. Oscilloscope screen. The function generator's signals from channels 1 and 2 correspond to the yellow and light blue traces. The signals from cameras 1 and 2 are represented by the pink and dark blue traces, respectively.

Next, we want to check the time delay between channel 1 and camera 1, and between channel 2 and camera 2.

Figure 3 portrays a zoomed view of the region right after the start of the recording, showing the time delay between the function generator's output signals and the cameras. The left side shows a delay of 440 ns between the time the function generator's channel 1 signal is sent and the time camera 1 starts recording. The right side depicts the 240 ns delay between the function generator's channel 2 signal and camera 2.

Figure 3. Left, time delay between the function generator’s output signal 1 and camera 1. Right, time delay between the function generator’s output signal 2 and camera 2.
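If you export the oscilloscope traces as CSV files, these delays can be computed directly from the data rather than read off the screen. Below is a minimal Python sketch; the file names, column layout, and threshold voltage are assumptions for illustration.

```python
import numpy as np

def first_rising_edge(t, v, threshold=1.5):
    """Return the time of the first low-to-high crossing of `threshold`."""
    above = v >= threshold
    idx = np.argmax(~above[:-1] & above[1:]) + 1  # first rising transition
    return t[idx]

# Hypothetical CSV exports: column 0 = time [s], column 1 = voltage [V]
# (add skiprows=... if your export includes header lines)
fg_t, fg_v = np.loadtxt("fg_channel1.csv", delimiter=",", unpack=True)
cam_t, cam_v = np.loadtxt("camera1_output.csv", delimiter=",", unpack=True)

delay = first_rising_edge(cam_t, cam_v) - first_rising_edge(fg_t, fg_v)
print(f"Trigger-to-recording delay: {delay * 1e9:.0f} ns")
```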

The goal of this exercise is to increase the frame rate from 1,000 FPS to 2,000 FPS, so the interframe time decreases from 1 ms to 500 μs. However, because the cameras do not start recording immediately upon receiving the trigger signal, a temporal error is introduced.

We can take the time delay as the upper limit of the temporal uncertainty Δt for the start of the recording. Thus we have Δt/t = 0.440 μs / 500 μs = 0.00088, or 0.09%. This small temporal error does not affect the visualization of the recorded event. A similar time delay, 250 ns, was recorded when the camera was set to operate at 40k FPS; please see the oscilloscope traces in Appendix A.

Therefore, the temporal uncertainty is Δt/t = 0.250 μs / 12.5 μs = 2%, where 12.5 μs is the interframe time of the combined 80,000 FPS footage; the uncertainty increases because of the shorter interframe time. Yet, this uncertainty still does not affect the visualization of the event in any considerable manner. The time delay for frequencies between 1 and 40 kHz can be seen in the Appendix, Figures A.1 and A.2.
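As a quick sanity check, the two ratios above can be reproduced with a few lines of Python:

```python
# Upper-bound temporal uncertainty dt/t from the measured trigger delays.
# Each entry: combined frame rate (FPS), measured delay (us).
cases = [(2_000, 0.440), (80_000, 0.250)]

for fps, delay_us in cases:
    interframe_us = 1e6 / fps        # interframe time of the combined footage
    u = delay_us / interframe_us     # uncertainty as a fraction of one frame
    print(f"{fps:>6} FPS: t = {interframe_us:g} us, dt/t = {u:.2%}")
```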

C. Footage captured

Figure 4 portrays images taken from camera 1, upper row, and those recorded by camera 2, lower row.

The interframe time in each sequence is 1 ms. The images were post-processed to match their intensity. In the first frame of the upper row, the droplet is not yet in the image. Due to the time delay between the cameras, the first frame of the lower row already shows the lower part of the droplet.

The vertical position of the droplet in the two sequences is different due to the time delay between the two cameras. In the last frame of the upper row the droplet is about to make contact with the plate. In the last frame of the lower row, the droplet has already impacted the plate.

Figure 4. Upper row. Images from camera 1. Lower row. Images from camera 2. In both sequences the interframe time is 1.0 ms. Time delay between cameras is 500 μs.


The images can then be combined into a single sequence. This is shown in Figure 5, in which the interframe time is 0.5 ms, i.e., 2,000 FPS.

One can see the droplet motion in more detail. In particular, between frames 19 and 21 from the left, we can now see an extra frame that we would have missed recording at 1,000 FPS.

Figure 5. Images from both cameras combined. The interframe time is 0.5 ms.
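Because camera 2 is triggered half an interframe time after camera 1, merging the footage is simply an interleave of the two frame sequences. Here is a minimal sketch, assuming each camera's frames were saved as numbered image files in their own folders (the folder names are hypothetical):

```python
from pathlib import Path

# Hypothetical folders of frames saved from each camera, already intensity-matched
cam1 = sorted(Path("camera1_frames").glob("*.png"))
cam2 = sorted(Path("camera2_frames").glob("*.png"))

# Camera 2 fires 500 us after camera 1, so the doubled-rate sequence
# simply alternates: cam1[0], cam2[0], cam1[1], cam2[1], ...
combined = [frame for pair in zip(cam1, cam2) for frame in pair]

for i, frame in enumerate(combined):
    print(f"frame {i:04d} <- {frame.name}")
```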

Note that the video footage shows that the droplet's expansion on the solid surface is not axisymmetric. Because of this, when the footage from cameras 1 and 2 is merged, small differences between the images become apparent, demonstrating the usefulness of this frame-rate-doubling technique.

A sample frame is portrayed in Figure 6. The image on the left was captured approximately 5 ms after the droplet impacted the plate. The picture on the right was captured 0.5 ms after the image on the left.

The arrow in the right-side image points to a feature of the spreading drop that is not visible in the left-side frame. That feature is on the back side of the droplet as seen from camera 1, and is thus hidden in the left image.

Figure 6. Left side, image from camera 1. Right side, image from camera 2. Notice the prominent drop, indicated by the arrow, that is not observed on the left side.

D. Conclusion

This application document shows that it is possible to double the frame rate using two Chronos cameras. This can be a useful option in applications that require more detail of a particular event, or where the user wants a higher frame rate while keeping the full frame.

If possible, it is better to use two monochrome cameras, since they require less illumination. Additionally, the oscilloscope data show that the delay between the signal received by the camera and the start of recording is within 500 ns. Thus, even at the fastest frame rate, the temporal uncertainty is small enough to provide sufficiently accurate footage.

The procedure to set up the camera configuration can be seen here.

2. Double the frame rate with a single field of view (FOV)

A. Test setup

In this section we describe another way to double the frame rate of footage captured by two Chronos cameras.

In the previously discussed method, the cameras are positioned at an angle 2θ ≈ 55-60°. This time, the cameras are positioned 90 degrees from each other, and a 50-50 beam splitter is used so that both cameras observe the scene from the same perspective.

Also, both cameras are mounted on top of an X-Y translation stage. This allows precise positioning of each camera's field of view. The function generator (FG), a Rigol DG812 10 MHz, is triggered by a trigger switch cable and then sends out the signal to each camera. It sends 5,000 cycles, which are sufficient to record the event.
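For reference, the half-period offset between the two trigger channels can also be set up programmatically. The sketch below uses pyvisa; the VISA address and the exact SCPI strings are assumptions based on the general Rigol DG-series command pattern, so please check your generator's programming guide before use.

```python
import pyvisa

FPS = 1_000  # per-camera frame rate; the combined footage will be 2,000 FPS

rm = pyvisa.ResourceManager()
# Address is hypothetical; list yours with rm.list_resources()
fg = rm.open_resource("USB0::0x1AB1::0x0643::DG8XXXXXXXXXX::INSTR")

# Square wave on both channels; channel 2 is phase-shifted by 180 degrees,
# i.e. half a period = 500 us at 1 kHz, so the two cameras interleave.
fg.write(f":SOUR1:APPL:SQU {FPS},5,2.5,0")    # freq, Vpp, offset, phase
fg.write(f":SOUR2:APPL:SQU {FPS},5,2.5,180")
fg.write(":OUTP1 ON")
fg.write(":OUTP2 ON")
```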

The scene is illuminated with a Godox SL-300 LED light source, and diffusive paper is added to create uniform light intensity. Figure 7 depicts the schematic diagram of the setup used. Both cameras are set to operate at the same frame rate and f# 2.8. However, the exposure time is different: 10 μs for camera 1 and 20 μs for camera 2. The color camera requires more light, so it is set to operate with a longer exposure time.

The expected temporal uncertainty due to the time delay of the cameras with respect to the signal from the function generator can be seen in Table A.1. Notice that the FPS reported in Table A.1 is the rate after combining the footage from both cameras.

Figure 7. Diagram of the setup to double the recording frame rate using a 50-50 beam splitter


Before the images can be merged to double the equivalent frame rate, a few steps need to be conducted.

First, the images captured by camera 2, the color camera positioned 90 degrees from the object of interest, need to be mirrored. Then, the images from both cameras are adjusted so the object of interest appears at the same position in both views. Finally, the intensity of both sets of images is adjusted so that each set displays approximately the same intensity. A sketch of these steps is shown below.
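As a rough illustration of these three steps, here is a sketch using OpenCV; the file names, rotation angle, and pixel offsets are placeholder values that would be measured from a reference target such as the screw in Figure 8.

```python
import cv2
import numpy as np

# Hypothetical single frames from each camera
img1 = cv2.imread("cam1_frame.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("cam2_frame.png", cv2.IMREAD_GRAYSCALE)

# Step 1: mirror camera 2, which views the scene through the beam splitter
img2 = cv2.flip(img2, 1)  # flip around the vertical axis

# Step 2: align with a small rotation and translation (values measured manually)
angle_deg, dx, dy = 0.5, 3, -4   # placeholder alignment values
h, w = img2.shape
m = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
m[:, 2] += (dx, dy)              # add the translation to the affine matrix
img2 = cv2.warpAffine(img2, m, (w, h))

# Step 3: match mean intensity so the interleaved sequence does not flicker
gain = img1.mean() / img2.mean()
img2 = np.clip(img2.astype(np.float32) * gain, 0, 255).astype(np.uint8)
```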

Figure 8 portrays an example of images taken when the cameras were set up, i.e., prior to the actual tests. In the upper row, the screw in the right-side image, from camera 2, clearly appears on the opposite side with respect to the left-side picture. The lower row displays both images after flipping the image on the right side.

A black line is added as an aid to the eye. It shows the adjustments needed so the cameras capture the same scene: the image on the left side needs to be rotated slightly, and the screw needs to appear a few pixels lower to match its position in the image on the right side.

Figure 8. Upper row. Original images from cameras 1 and 2. Lower row. The image from camera 2, right side, has been flipped horizontally (mirrored).


B. Footage captured

The sequences shown in Figures 9 and 10 are very similar to those obtained with the independent-FOV method (Figures 4 and 5).

The vertical position of the droplet in the two sequences is different due to the time delay between the two cameras. In the last frame of the upper row the droplet has made contact with the plate. In the lower row, the droplet has impacted the plate in the last two frames. Merging the two sequences shows the droplet trajectory and its spreading on the plate, as seen in Figure 10. The interframe time is 500 μs.

The depth of field is similar, as both images display approximately the same features; this can be seen in Figure 11. The front and back parts of the spreading droplet are out of focus in both images.

Figure 9. Upper row, images from camera 1. Lower row, images from camera 2. Interframe time is 1.0 ms in both sequences. Time delay between cameras is 500 μs.

Figure 10. Images combined from both cameras. In the sequence, the interframe time is 0.5 ms.

Figure 11. Left side, image from camera 1. Right side, image from camera 2. With a single FOV approach the cameras display the same features in both frames. The image on the right side was recorded 500 μs after the image on the left side, ~5 ms after the droplet landed on the plate.


C. Conclusion

Both methods described allow the user to double the frame rate of the Chronos camera.

A few things to bear in mind when considering this type of video recording:

With this single-FOV technique, the images captured can be merged more accurately, as both cameras capture the same scene. The user needs to adjust the cameras to get the same field of view, the same depth of field (use the same f#), and similar image intensity. This is crucial to get the same perspective from each camera. It is recommended to use the same type of camera, i.e., monochrome or color, and the same type of lens, if possible.

Before recording video, the focal plane must be set. The object also needs to sit at approximately the same position within each camera's depth of field, so the images appear to come from a single source.

The method requires the use of a beam splitter and a driving device; here, we used a function generator. These items add cost to the recording setup, a factor that must be taken into account.

If you are interested in trying the methods presented in this tutorial, we recommend using an oscilloscope. This helps keep track of the time delay between the cameras. We show that the time delay between the signal from the function generator and the camera can be ~250 ns (please see Appendix A). Yet, we have also recorded time delays of ~450 ns. This is still a short temporal uncertainty, but it is worth verifying in your application.



Appendix A

Here we show the time delay that exists between the signal sent to the cameras by the Function Generator (FG) and the time when the cameras start the exposure period.

The frequency range tested spans up to the maximum frame rate at which the Chronos 1.4 camera can operate. The data was collected with a Rigol DS1104 1 GSa/s oscilloscope. The images below depict the signals that correspond to four different frame rates: 1k, 10k, 20k and 40k FPS. These frame rates correspond to interframe times of 1 ms, 100 μs, 50 μs and 25 μs, respectively.

The signals that correspond to the function generator's channel 1 and camera 1, as well as the function generator's channel 2 and camera 2, are tightly aligned; the time delay is shorter than can be resolved at this horizontal scale. Also notice that the rise time of the cameras' signal is much shorter than the fall time. This needs to be considered, especially in applications where the exposure time is close to the interframe time.

Figure A.1. Signals at 1, 10, 20 and 40 kHz. The function generator's signals from channels 1 and 2 correspond to the yellow and light blue traces. The signals from cameras 1 and 2 are represented by the pink and dark blue traces, respectively.

The time delay between the function generator’s signals and the cameras is portrayed next. The column on the left side depicts the time delay between the FG’s channel 1 and camera 1. The column on the right represents the time delay between the FG’s channel 2 and camera 2. From top to bottom the frequency tested is 1, 10, 20 and 40 kHz, respectively.

In the tests performed, as seen in the images, the time delay was bounded between 225 and 260 ns. After the images are merged, the frame rate doubles. Table A.1 depicts the frame rates and the expected temporal uncertainty (U) after the footage from both cameras is combined. Notice that the temporal uncertainty in Table A.1 is an upper-bound expected temporal error: both cameras start recording after the time delays shown in the table, so the net time delay between the cameras is shorter. Even at the highest frame rate, the expected temporal error is only 2.1%.

Figure A.2. Time delay between the FG’s channel 1 and camera 1, left column. Time delay between the FG’s channel 2 and camera 2, right column. From top to bottom the frequency is 1, 10, 20 and 40 kHz.

FPS    | Interframe time t, μs | Time delay FG-Cam1 Δt1, μs | Time delay FG-Cam2 Δt2, μs | U(FG-Cam1) Δt1/t, % | U(FG-Cam2) Δt2/t, %
2,000  | 500                   | 0.240                      | 0.230                      | 0.05                | 0.05
20,000 | 50                    | 0.225                      | 0.250                      | 0.45                | 0.50
40,000 | 25                    | 0.230                      | 0.240                      | 0.92                | 0.96
80,000 | 12.5                  | 0.225                      | 0.260                      | 1.80                | 2.08

Table A.1. Temporal uncertainty due to the time delay between the FG's signal and the start of each camera's recording.
