Tutorial: How to create a high-speed 3D video
Written by Roberto Gonzalez, PhD

This document describes another application that can be explored with a pair of Chronos high-speed cameras. It is an interesting application that can be used for marketing, entertainment, or scientific purposes.

Here, we detail the steps followed to create a high-speed 3D video using a pair of Chronos 1.4 cameras. The setup used to capture the footage is the same one used to double the frame rate with two Chronos 1.4 cameras and a 50-50 beam splitter, explained here (see method 2 on page 4).

A. Test setup

When recording high-speed footage for a 3D movie, camera synchronization and spatial alignment are crucial to ensure that both cameras are capturing the same scene.

We synchronize the cameras using a function generator (FG), a Rigol DG812 (10 MHz). The signals from the FG's channels 1 and 2 are sent simultaneously to the cameras. A single channel can be used to trigger both cameras, as depicted in figure 1. However, the color camera requires more light than the monochrome unit. To compensate for this, the duty cycle of channel 2 is set to twice that of channel 1. This results in an exposure time of 20 μs for the color camera and 10 μs for the monochrome camera.
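The relationship between the trigger pulse and the exposure is simple arithmetic: the exposure equals the pulse period multiplied by the duty cycle. Below is a minimal Python sketch of this calculation; the 1 kHz trigger rate and the 1% duty cycle are assumptions for illustration, chosen so the results match the 10 μs and 20 μs exposures quoted above.

```python
# Minimal sketch: exposure time implied by the FG pulse settings.
# The trigger rate and duty cycles below are assumed for illustration.

def exposure_us(frequency_hz: float, duty_cycle: float) -> float:
    """Exposure in microseconds for a trigger pulse of the given
    frequency and duty cycle (high fraction of the period)."""
    period_us = 1e6 / frequency_hz
    return period_us * duty_cycle

trigger_hz = 1000.0          # assumed trigger rate (1000 frames per second)
duty_ch1 = 0.01              # channel 1 -> monochrome camera
duty_ch2 = 2 * duty_ch1      # channel 2 -> color camera, twice the duty cycle

print(exposure_us(trigger_hz, duty_ch1))  # 10.0 us (monochrome)
print(exposure_us(trigger_hz, duty_ch2))  # 20.0 us (color)
```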

The scene is illuminated by a Godox SL300 LED lamp. Diffusive paper is placed in front of the lamp to create an area of uniform light intensity. The trigger settings can be seen in figure 2; the settings apply to both cameras. Each camera is mounted on an X-Y translation stage (resolution 10 μm) to align their fields of view accurately. A test begins by pressing the trigger switch. The FG sends out 5000 cycles to each camera to record the event. Meanwhile, a droplet of milk is dispensed from a syringe located 0.75 m above a glass full of coffee. The FG and camera signals are captured by an oscilloscope (Rigol DS1104, 1 GSa/s).

Figure 1. Schematic diagram (top view) to trigger the cameras simultaneously. Both cameras record the same scene using a 50-50 beam splitter.

B. Trigger settings

Figure 2 displays the trigger settings for both cameras, which are triggered simultaneously by the function generator. The IO 2 section shows the Frame Sync Output and 20 mA Pullup options highlighted. Each camera's output signal was sent to an oscilloscope to verify the correct timing of the cameras' recordings. A typical set of signals is shown in figure 3.

Figure 2. Trigger settings. The settings apply to both cameras.

The signals from the function generator and the cameras can be seen in figure 3. From top to bottom, the yellow and light blue traces correspond to the function generator's signals from channels 1 and 2, respectively. The pink trace is the output signal from camera 1, and the dark blue trace is the output signal from camera 2.

Both cameras are triggered simultaneously, so they start recording soon after they receive the FG's signal. On the right side, the figure shows the time delay of cameras 1 and 2 with respect to the FG's channels 1 and 2, respectively. Notice the short time delays of only 225 and 235 ns. Interestingly, the net delay between the two cameras is only 10 ns.

Figure 3. Left. Trigger signals from the function generator and the cameras' output signals. Right. Zoom around the starting time of the recording. The function generator's signals from channels 1 and 2 are the yellow and light blue traces, respectively. The signals from cameras 1 and 2 are depicted by the pink and dark blue traces, respectively.

C. Recorded footage

The images recorded by both cameras can be seen in figure 4.

Figure 4. A. Footage from camera 1. B. Footage from camera 2. C. Footage from camera 2 after flipping the images horizontally.

The images from cameras 1 and 2 are shown in the first and second rows, respectively. The images from camera 2 need to be flipped, as that camera views the scene in the reflection of the 50-50 beam splitter. The horizontally flipped images can be seen in the bottom row, figure 4C. The position of the satellite droplet about to detach in the third image on the right is approximately the same in figures 4A and 4C.
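As a reference, here is a minimal Python sketch of the flip step using OpenCV, assuming the camera-2 footage has been exported as numbered PNG frames; the folder and file names are hypothetical.

```python
import cv2
import glob
import os

# Flip every exported camera-2 frame around the vertical axis.
os.makedirs("camera2_flipped", exist_ok=True)
for path in sorted(glob.glob("camera2/frame_*.png")):
    frame = cv2.imread(path)            # frame loaded in BGR order
    flipped = cv2.flip(frame, 1)        # flipCode=1 -> horizontal flip
    out_path = os.path.join("camera2_flipped", os.path.basename(path))
    cv2.imwrite(out_path, flipped)
```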

Next, we need to create the videos for the left and right eye. To do so, we take the video from each camera and keep only its red component (camera 1) or green component (camera 2). Figure 5 shows the images after performing these operations.
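A sketch of this channel selection for a single pair of frames, again with OpenCV and hypothetical file names; zeroing the unused channels keeps the frames three-channel so they can be merged later.

```python
import cv2
import numpy as np

def keep_channel(frame: np.ndarray, channel: int) -> np.ndarray:
    """Return a copy of the frame with only one BGR channel kept."""
    out = np.zeros_like(frame)
    out[:, :, channel] = frame[:, :, channel]
    return out

left = cv2.imread("camera1/frame_0001.png")           # camera 1 frame
right = cv2.imread("camera2_flipped/frame_0001.png")  # flipped camera 2 frame

left_red = keep_channel(left, 2)      # OpenCV is BGR: index 2 is red
right_green = keep_channel(right, 1)  # index 1 is green
cv2.imwrite("left_red_0001.png", left_red)
cv2.imwrite("right_green_0001.png", right_green)
```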

Figure 5. Top. Footage from camera 1 after selecting only the red component. Bottom. Footage from camera 2 with the green component selected.

The images then need to be combined so that they look like the anaglyph images depicted in figure 6. We use DaVinci Resolve and follow the steps provided in the following video: https://www.youtube.com/watch?v=rfUn9iuaFO8

Note to the user: The footage from the cameras can be processed to extract the red and green components, and the images can also be flipped and merged, all within DaVinci Resolve. We are not promoting this software; we used it because it is convenient. However, the steps described here can be carried out using other open-source packages such as ImageJ or Python.
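For instance, the merge step can be sketched in Python as below. Since the red-only and green-only frames produced earlier have no overlapping channels, simply adding them yields the anaglyph; the file names are hypothetical.

```python
import cv2

left_red = cv2.imread("left_red_0001.png")       # red-only left-eye frame
right_green = cv2.imread("right_green_0001.png") # green-only right-eye frame

# The kept channels do not overlap, so adding the frames forms the anaglyph.
anaglyph = cv2.add(left_red, right_green)
cv2.imwrite("anaglyph_0001.png", anaglyph)
```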

Once the footage from the two cameras is processed, the final footage looks like this. (link to the video). You will need 3D glasses, though, to see the depth effect in the movie.

Figure 6. Anaglyph images after combining the images portrayed in figure 5.

D. Conclusion

The present document describes how to create an anaglyph video. The recording requires a careful setup to match the footage from the two cameras precisely.

The video was created with DaVinci Resolve. However, it is possible to use other open source software packages to produce an anaglyph video.

There are two important factors to take into account when producing the video:

  • The amount of offset between the two sets of images. Under COLOR in DaVinci Resolve, look for the 3D tab approximately at the center of the screen, click on it, and under 3D stereo > vision, input the amount of offset in the Conv. field (a Python sketch of this horizontal offset follows this list).

  • The speed at which the video is to be shown. Too fast, and the viewer may not catch the important details; too slow, and the video fails to hold the viewer's attention. This is at the discretion and experience of the video producer.
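To illustrate what the convergence offset does outside DaVinci Resolve, the sketch below shifts the right-eye frame horizontally before merging; the 12-pixel offset and the file names are assumptions for illustration.

```python
import cv2
import numpy as np

left_red = cv2.imread("left_red_0001.png")
right_green = cv2.imread("right_green_0001.png")

offset = 12                                     # assumed disparity in pixels
shifted = np.roll(right_green, offset, axis=1)  # positive shifts to the right
shifted[:, :offset] = 0                         # blank the wrapped-around strip

# Merging the shifted right-eye frame changes the perceived depth.
anaglyph = cv2.add(left_red, shifted)
cv2.imwrite("anaglyph_offset_0001.png", anaglyph)
```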

Notice that the merged footage displays the time stamp at the lower part of the image. This part can be removed, as doing so does not affect the final video.
