Tip

Download this tutorial as a Jupyter notebook, or as a Python script with code cells.

Tutorial 4: Motion Compensation

This tutorial introduces the motion compensation capabilities of optimap. Motion in optical mapping studies causes severe measurement artifacts, often referred to as ‘motion artifacts’ [Rohde et al.[1], Christoph and Luther[2], Lebert et al.[3], Kappadan et al.[4], Christoph and Ripplinger[5]]. To avoid these artifacts, the vast majority of optical mapping studies are conducted with pharmacological agents, such as Blebbistatin, which suppress the contractile motion of heart tissue by uncoupling the excitation-contraction coupling mechanism [Swift et al.[6]]. optimap includes numerical methods with which motion artifacts can be substantially reduced. Instead of suppressing motion pharmacologically, we can use optimap to track, stabilize and inhibit motion and motion artifacts numerically. In many cases, uncoupling agents are no longer needed and optical mapping studies can be performed with freely contracting tissues.

This tutorial explains how to process cardiac optical mapping recordings with motion and discusses best practices to reduce motion artifacts using numerical motion tracking. Tutorial 7 discusses an additional optical technique, ratiometric imaging, which can be applied in combination with numerical motion tracking to further reduce motion artifacts, see also Kappadan et al.[7].

We will discuss several examples which illustrate the effectiveness of numerical motion compensation as well as its limitations.

Motion can be compensated with just 1 line of code in optimap:

video_compensated = om.motion_compensate(video)

Types of Video Data and Rhythms

Numerical motion tracking and artifact compensation can be performed with whole organ, cell culture, and single cell preparations. The effectiveness of numerical motion compensation depends on various factors including properties of the video data and the type of rhythm:

  • Strong motion and deformation pose the most challenging case, because the tissue can move out of the field of view or deform so strongly that tracking fails. With strong motion (e.g. during sinus rhythm) it is also necessary to use ratiometric imaging in addition to numerical motion compensation, see Tutorial 7 for more details.

  • Arrhythmias, in particular tachyarrhythmias such as ventricular fibrillation (VF), are easier to process because motion is moderate or small, see Christoph et al.[8], Christoph and Luther[2]. Ratiometric imaging, as described in Tutorial 7, is not necessarily required during arrhythmias depending on the motion and desired analysis, see also Fig. 12 in Kappadan et al.[7].

  • Measuring action potential durations (APDs) (see Tutorial 8) with motion is much more challenging than measuring activation times/maps (see Tutorial 5) or conduction velocities (see Tutorial 6). APD measurements with contracting tissues require ratiometric imaging, see Tutorial 7 and Figs. 1-7 in Kappadan et al.[7].

  • Noisy video data can impede the tracking, see Lebert et al.[3].

  • A shallow depth of focus and blurring can impede the tracking.

  • Tracking tissue close to the video boundaries often fails. Keep the tissue centered and leave enough space around it during imaging.

  • The tissue needs to be illuminated as evenly as possible, see Tutorial 7 for details.

Weak Motion

Relatively little contractile motion occurs during tachyarrhythmias, such as ventricular fibrillation (VF) in isolated hearts, or in cell cultures. Such small or finite motion promises the highest success rates when compensating motion artifacts using numerical motion tracking. Tutorial 1 already demonstrated numerical motion compensation with ventricular fibrillation (VF), and numerical motion artifact compensation would be similarly successful with recordings of other rhythms (e.g. sinus or pacing) in which Blebbistatin was not 100% effective. The VF recording was obtained entirely without Blebbistatin. The dominant frequency of the vortex waves is very high (> 10 Hz) and, accordingly, the contractile motion is very small. Numerical motion compensation works very well under these circumstances provided the video quality is sufficiently high. In this VF example, motion artifacts are very strong without numerical motion compensation despite the minimal motion, see below. After numerical motion compensation, it is possible to normalize the data, visualize waves, calculate phase maps, see Tutorial 9, and calculate dominant frequencies. The following lines of code load the VF example file from our website cardiacvision.ucsf.edu, perform the motion compensation and display the result:

import optimap as om

filepath = om.download_example_data("VF_Rabbit_1.npy")
video_VF = om.load_video(filepath, frames=500)
video_VF_compensated = om.motion_compensate(video_VF, contrast_kernel=5, presmooth_spatial=1, presmooth_temporal=1)
calculating flows (CPU): 100%|██████████| 500/500 [00:28<00:00, 17.84it/s]

om.show_videos([video_VF, video_VF_compensated], titles=["original video", "compensated"]);

The recording is from Chowdhary et al.[9] and was performed with a voltage-sensitive fluorescent dye (Di-4-ANEPPS) and a Basler acA720-520um camera at 500fps.

The pixel-wise normalized videos show action potential vortex waves in the motion compensated videos and motion artifacts in the uncompensated videos:

video_VF_compensated_norm = om.video.normalize_pixelwise_slidingwindow(video_VF_compensated, window_size=60)
video_VF_norm = om.video.normalize_pixelwise_slidingwindow(video_VF, window_size=60)
om.show_videos([video_VF_compensated_norm, video_VF_norm], titles=["compensated", "uncompensated"]);
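To clarify what the sliding-window pixel-wise normalization computes, here is a simplified numpy sketch of the idea; this is an illustration only, not optimap's actual implementation of normalize_pixelwise_slidingwindow():

```python
import numpy as np

def normalize_pixelwise_sliding(video, window_size=60):
    """Sliding-window min-max normalization per pixel (simplified sketch).

    For each frame t, every pixel is rescaled to [0, 1] using the minimum
    and maximum of that pixel's intensity within a temporal window
    centered on t.
    """
    n = video.shape[0]
    half = window_size // 2
    out = np.zeros_like(video, dtype=np.float32)
    for t in range(n):
        lo, hi = max(0, t - half), min(n, t + half)
        vmin = video[lo:hi].min(axis=0)
        vmax = video[lo:hi].max(axis=0)
        # small epsilon avoids division by zero for constant pixels
        out[t] = (video[t] - vmin) / (vmax - vmin + 1e-12)
    return out
```

The window size (here 60 frames, matching the call above) should roughly cover one oscillation period of the signal so that each window contains both a peak and a trough.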

The VF example highlights that motion artifacts can occur with very minimal motion and that numerical motion tracking can compensate these motion artifacts very effectively, see also Christoph and Luther[2], Lebert et al.[3]. A comparison of the optical traces retrieved from the uncompensated and compensated videos further demonstrates the effectiveness:

Warning

TO DO: add optical traces before and after

# plot
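In the meantime, optical traces can be compared manually by extracting a single pixel's time series from the uncompensated and compensated videos; the helper below is a generic matplotlib sketch (the function name and pixel coordinates are illustrative, not part of optimap):

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_trace_comparison(video_raw, video_compensated, x, y):
    """Plot the time series of pixel (y, x) before and after compensation."""
    trace_raw = video_raw[:, y, x]
    trace_comp = video_compensated[:, y, x]
    fig, ax = plt.subplots()
    ax.plot(trace_raw, label="uncompensated")
    ax.plot(trace_comp, label="compensated")
    ax.set_xlabel("frame")
    ax.set_ylabel("intensity")
    ax.legend()
    return trace_raw, trace_comp
```

In a well-compensated recording, the compensated trace should show clean action potential deflections, while the uncompensated trace is dominated by motion-related intensity fluctuations.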

Strong Motion

Videos of sinus rhythm or pacing often include much stronger motion and deformation. Stronger motion and deformation is more challenging to track for several reasons. Nevertheless, it is possible to track data with strong motion, given that certain prerequisites are met. The following lines of code load an example video file of sinus rhythm from our website cardiacvision.ucsf.edu and perform motion compensation:

import optimap as om

filename = om.download_example_data("Sinus_Rabbit_1.npy")
video_sinus = om.load_video(filename)
video_sinus = om.video.rotate_left(video_sinus)
video_sinus_compensated_ref0 = om.motion_compensate(video_sinus, contrast_kernel=5, ref_frame=0)
video_sinus_compensated_ref40 = om.motion_compensate(video_sinus, contrast_kernel=5, ref_frame=40)
calculating flows (CPU): 100%|██████████| 90/90 [00:04<00:00, 21.65it/s]
calculating flows (CPU): 100%|██████████| 90/90 [00:04<00:00, 21.39it/s]

We actually performed the motion tracking and compensation twice: once with reference frame 0 (the first frame in the video) and once with reference frame 40 (the video contains only 90 frames). The reference frame can be specified in motion_compensate(): tracking will be performed with respect to this frame, which means that the tissue’s displacements will be calculated with respect to the tissue’s mechanical configuration in this frame, and all other video frames will be ‘warped’ onto (will look similar to) this frame using the displacement vector field data. You can see that the choice of the reference frame can influence the motion compensation:

om.show_videos([video_sinus, video_sinus_compensated_ref40, video_sinus_compensated_ref0], titles=["original video", "compensated ref 40", "compensated ref 0"]);

With reference frame 40 the compensated video is completely motion-stabilized and we could proceed with post-processing. With reference frame 0, however, the compensated video exhibits warping artifacts. This is because the motion relative to frame 0 grows much larger towards the end of the video than the motion relative to a frame halfway through it (this is specific to this example and not always an issue), and the tracking algorithm was not able to establish a spatial correlation between frame 0 and all other frames. With sinus rhythm or pacing it is usually good practice to pick a reference frame during diastole shortly before the depolarization phase or the application of a pacing stimulus, as such a frame shows the tissue in a relaxed, uncontracted state (relevant when computing tissue strain or contraction). Depending on the magnitude of translational and/or rotational motion and deformation, it might also be necessary to check whether the camera sees the tissue throughout the entire sequence of frames; selecting a different reference frame can circumvent issues with the tracking. During VF, by contrast, it is not strictly necessary to pick a particular reference frame: any frame is likely to work as long as the tissue does not move much. A related issue during sinus rhythm is that the heart might rotate during contraction, such that parts of the tracked tissue no longer face the camera. In that case, only frames in which the tissue is fully visible can be tracked, not the entire video sequence. The latter problem can only be addressed using multi-camera optical mapping systems as in Chowdhary et al.[9].
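For long recordings it can be tedious to find a relaxed diastolic reference frame by eye. One purely illustrative heuristic (not an optimap function) is to pick the frame with the least frame-to-frame change, which is a rough proxy for minimal motion:

```python
import numpy as np

def least_motion_frame(video):
    """Return the index of the frame with the smallest mean absolute
    difference to its neighbors -- a rough proxy for minimal motion."""
    diffs = np.abs(np.diff(video.astype(np.float32), axis=0)).mean(axis=(1, 2))
    # score each interior frame t by the motion into and out of it
    scores = diffs[:-1] + diffs[1:]          # covers frames 1 .. n-2
    return int(np.argmin(scores)) + 1
```

Note that for voltage or calcium recordings the frame differences also contain the fluorescence signal itself, so this heuristic should only be treated as a starting point and the chosen frame verified visually.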

The original and the compensated pixel-wise normalized videos look as follows:

video_sinus_norm = om.video.normalize_pixelwise(video_sinus[:25])
video_sinus_compensated_ref40_norm = om.video.normalize_pixelwise(video_sinus_compensated_ref40[:25])
om.show_videos([video_sinus_norm, video_sinus_compensated_ref40_norm], titles=["original video", "compensated ref 40"], interval=100);

The difference between the uncompensated and compensated videos before and after motion tracking is striking. However, with large motion, numerical motion tracking does not entirely remove motion artifacts. Note that we selected only the first 25 frames during the normalization. Selecting such a short part of the video, which shows only the depolarization wave front propagating across the tissue, helps suppress additional motion artifacts. It is then possible to compute activation maps from the short compensated video, see Tutorial 5. However, if you want to measure action potential durations (APDs) in longer videos, you need to combine the numerical motion compensation with ratiometric imaging, see Tutorial 7 and Tutorial 8.

The recording was performed with a voltage-sensitive fluorescent dye (Di-4-ANEPPS) and a Basler acA720-520um camera at 500fps. Experimenters: Jan Lebert, Namita Ravi & Jan Christoph (University of California, San Francisco, USA), 2022.

Customizing Motion Tracking

The success of motion tracking depends on various other factors such as noise, image contrast, blurring, illumination, etc., see Lebert et al.[3]. Internally, the motion_compensate() function executes other functions, including estimate_displacements(), contrast_enhancement(), smooth_spatiotemporal() and warp_video(), which all influence the motion tracking and compensation process. The whole process can be customized to the particular video data by using these functions separately or setting their parameters to specific values. One can provide parameters to motion_compensate() (e.g. contrast_kernel=5, etc.) or execute the functions individually one after the other. Instead of:

video_sinus_compensated = om.motion_compensate(video_sinus, presmooth_spatial=1, presmooth_temporal=1, contrast_kernel=5, ref_frame=40)

one can also use the following chain of processing steps:

video_sinus_smoothed = om.video.smooth_spatiotemporal(video_sinus, 1, 1)
video_sinus_smoothed_contrast = om.motion.contrast_enhancement(video_sinus_smoothed, 5)
displacement_vectors_sinus = om.motion.estimate_displacements(video_sinus_smoothed_contrast, 40)
video_sinus_compensated = om.motion.warp_video(video_sinus, displacement_vectors_sinus)

to retrieve the same motion compensated video.

In particular, contrast-enhancement and spatio-temporal smoothing are two very important pre-processing steps, which were shown in Christoph et al.[8] and Lebert et al.[3] to increase the robustness of the tracking:

  • Spatio-temporal smoothing

  • Contrast-enhancement

Spatio-temporal smoothing is applied to the input video (here video_sinus). It reduces noise in the input video, which subsequently also reduces noise in the tracking (e.g. jitter of the displacement vectors over time). Videos with very low noise do not require this step, and, in general, the sensitivity to noise depends on the particular tracking algorithm, see Lebert et al.[3]. The pre-smoothing can be applied either by specifying presmooth_spatial=1, presmooth_temporal=1 as input parameters in motion_compensate(), or by performing smooth_spatiotemporal() with the same parameters on the original video and then passing the smoothed video to the contrast-enhancement and tracking. The filter kernel size needs to be adapted to the video data, but we recommend small kernel sizes, as the smoothing is only intended to suppress high-frequency noise (resulting from the camera). Larger kernel sizes might smooth the video too aggressively, such that important features and image contrast are lost (keep in mind that features and contrast in the image are required for tracking).
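The effect of such pre-smoothing can be sketched with a standard Gaussian filter applied along time and space; this is a plausible stand-in for illustration, and the actual kernel used by optimap's smooth_spatiotemporal() may differ:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_spatiotemporal_sketch(video, sigma_temporal=1.0, sigma_spatial=1.0):
    """Gaussian smoothing along time (axis 0) and space (axes 1, 2).

    Small sigmas suppress high-frequency camera noise while preserving
    the image features needed for tracking.
    """
    return gaussian_filter(video.astype(np.float32),
                           sigma=(sigma_temporal, sigma_spatial, sigma_spatial))
```

Keeping both sigmas at about 1 pixel/frame, as in the examples above, removes most sensor noise; larger values start to blur the tissue texture that the tracking relies on.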

Contrast-enhancement, see Christoph and Luther[2], is a critical pre-processing step for two reasons:

  • Tracking algorithms require features or image contrast to be able to track motion in a visual scene. The contrast-enhancement amplifies spatial features and image contrast and therefore facilitates the tracking.

  • Contrast-enhancement increases the robustness of the tracking and inhibits tracking artifacts. Tracking algorithms can inadvertently track the movement of action potential or calcium waves, see Fig. 11B in Christoph and Luther[2]. Contrast-enhancement inhibits this phenomenon very effectively, because it amplifies spatial features and suppresses temporal fluctuations caused by the waves, see Figs. 11 and 10-12 in Christoph and Luther[2] and Lebert et al.[3], respectively. We strongly recommend contrast-enhancement to avoid tracking artifacts: motion_compensate() performs contrast-enhancement by default (with default parameters); alternatively, you can apply contrast-enhancement to the pre-smoothed video yourself and then track the contrast-enhanced video. The warping is then applied to the original video using the displacement data obtained from tracking the contrast-enhanced video.

The contrast-enhancement involves a small disk-shaped kernel within which the pixels are normalized, see Fig. 4 in Christoph and Luther[2]. The size of this kernel needs to be specified as it needs to match the relevant features in the video image:

video_sinus_contrast3 = om.motion.contrast_enhancement(video_sinus, 3)
video_sinus_contrast5 = om.motion.contrast_enhancement(video_sinus, 5)
video_sinus_contrast9 = om.motion.contrast_enhancement(video_sinus, 9)
om.show_videos([video_sinus_contrast3, video_sinus_contrast5, video_sinus_contrast9],
               titles=["contrast kernel 3", "contrast kernel 5", "contrast kernel 9"],
               skip_frame=3);
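Conceptually, the contrast-enhancement normalizes each pixel within a small local neighborhood. The following numpy/scipy sketch illustrates the idea using a square window instead of optimap's disk-shaped kernel; it is a simplified illustration, not the actual implementation of om.motion.contrast_enhancement():

```python
import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter

def local_contrast_sketch(frame, kernel_size=5):
    """Local min-max normalization within a square window.

    Simplified stand-in for optimap's disk-shaped contrast kernel: each
    pixel is rescaled to [0, 1] relative to its local neighborhood,
    amplifying spatial features and suppressing slow intensity changes.
    """
    frame = frame.astype(np.float32)
    lo = minimum_filter(frame, size=kernel_size)
    hi = maximum_filter(frame, size=kernel_size)
    # epsilon avoids division by zero in featureless regions
    return (frame - lo) / (hi - lo + 1e-12)
```

Because the normalization is local, global brightness changes caused by the fluorescence signal largely cancel out, which is exactly why contrast-enhanced videos are more robust to track.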

The optimal kernel size depends on the size of the video image and the size of the relevant features in the video image in pixels. Here, the optimal kernel size is about 7-11 pixels (diameter). Smaller kernels create noisy images, which can produce noisy tracking data and degrade the quality of the motion compensation. Larger kernels blur image features, which subsequently degrades the accuracy of the tracking. The compensated videos may exhibit warping artifacts when the contrast-enhancement kernel is too small:

sinus_displacement_vectors_nocontrast = om.motion.estimate_displacements(video_sinus, 40)
sinus_displacement_vectors_contrast3 = om.motion.estimate_displacements(video_sinus_contrast3, 40)
sinus_displacement_vectors_contrast9 = om.motion.estimate_displacements(video_sinus_contrast9, 40)

video_sinus_compensated_nocontrast = om.motion.warp_video(video_sinus, sinus_displacement_vectors_nocontrast)
video_sinus_compensated_contrast3 = om.motion.warp_video(video_sinus, sinus_displacement_vectors_contrast3)
video_sinus_compensated_contrast9 = om.motion.warp_video(video_sinus, sinus_displacement_vectors_contrast9)
calculating flows (CPU): 100%|██████████| 90/90 [00:04<00:00, 21.35it/s]
calculating flows (CPU): 100%|██████████| 90/90 [00:04<00:00, 21.48it/s]
calculating flows (CPU): 100%|██████████| 90/90 [00:04<00:00, 21.41it/s]

om.show_videos([video_sinus_compensated_nocontrast, video_sinus_compensated_contrast3, video_sinus_compensated_contrast9],
               titles=["no contrast", "contrast kernel 3", "contrast kernel 9"],
               skip_frame=1, interval=10);

The compensated video on the left without contrast-enhancement exhibits strong warping artifacts. The video at the center exhibits some residual warping artifacts, which are hard to notice but occur when the action potential wave front propagates across the image (the kernel size contrast_kernel=3 is too small). The video on the right does not exhibit any noticeable residual motion on the ventricular surface (contrast_kernel=9). The atria often exhibit residual motion, in particular when they are close to the boundaries of the video image. Note that the original video was not pre-smoothed. If the contrast-enhanced videos are too noisy even with the optimal kernel size, the original raw video should be pre-smoothed as well to minimize noise in the tracking.

When a compensated video does not exhibit any residual motion, the motion compensation was successful; conversely, if it does, the tracking did not work properly. It is not always immediately obvious whether residual motion is still present in compensated videos, but confirming that they are free of residual motion is the best way to determine whether the tracking accurately follows the motion. It is best to zoom in and assess the motion visually. Please note that some video players or exported videos compress the sequence of images and might give the impression that there is residual motion when there is none. monochrome is a viewer that displays the raw video pixel by pixel without any interpolation, compression or other alterations. Alternatively, one can verify that the tracked displacement vectors match the motion of the tissue, see below, but this is also sometimes not easy to verify. Lastly, one can check the amount of residual motion artifacts in the pixel-wise normalized optical maps, see Fig. 10 in Christoph and Luther[2].
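As a rough quantitative complement to the visual inspection, one can compare the mean absolute frame-to-frame difference before and after compensation; this is an illustrative metric, not an optimap function, and it also picks up the fluorescence signal itself, so it should only be used for relative comparisons:

```python
import numpy as np

def mean_frame_difference(video):
    """Mean absolute difference between consecutive frames.

    Residual motion (as well as remaining signal changes) increases
    this score; a well-compensated video should score lower than the
    original.
    """
    v = video.astype(np.float32)
    return float(np.abs(np.diff(v, axis=0)).mean())
```

For example, comparing mean_frame_difference(video_sinus) against mean_frame_difference(video_sinus_compensated_ref40) should show a clear drop when the compensation worked.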

Optical Traces: Before and After

We can compare the optical traces from before and after …

Warning

This tutorial is currently under development. We will add more information soon.

Retrieving Displacement Data

So far we have discussed motion tracking only for motion compensation, but it is also possible to use the displacement data itself to analyze the mechanical deformation and contractile activity of the tissue:

sinus_displacement_vectors = om.motion.estimate_displacements(video_sinus)

The displacement vectors resulting from the tracking can be displayed as follows:

#om.motion.
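The displacement arrays can also be analyzed directly with numpy. Assuming the vectors are stored as an array of shape (frames, height, width, 2) — an assumption about the layout, please check the optimap documentation — the mean displacement magnitude per frame yields a simple contraction signal:

```python
import numpy as np

def mean_displacement_magnitude(displacements):
    """Per-frame mean magnitude of the displacement vectors.

    Assumes shape (frames, height, width, 2); the last axis holds the
    x and y components of each vector.
    """
    magnitudes = np.linalg.norm(displacements, axis=-1)  # (frames, h, w)
    return magnitudes.mean(axis=(1, 2))                  # (frames,)
```

Plotting this signal over time shows when and how strongly the tissue contracts relative to the reference frame.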

You can test all processing routines above with your own data. Simply replace filename with the name of your file (e.g. filename = 'your_video_file.rsh') if it is located in the same folder as this script.