Tip

Download this tutorial as a Jupyter notebook, or as a Python script with code cells.

Tutorial 7: Ratiometry

Combining Numerical Motion Compensation and Ratiometry

In this tutorial, we discuss the analysis of ratiometric optical mapping data and the benefits of combining numerical motion tracking with ratiometric imaging. Ratiometry can help suppress motion artifacts: it exploits the fact that some fluorescent indicators, such as Di-4-ANEPPS or Di-4-ANBDQPQ, respond differently to different excitation wavelengths and/or produce different signals with different emission filters. Knisley et al. [2000] and Bachtel et al. [2011] demonstrated ratiometric voltage-sensitive optical mapping with Di-4-ANEPPS and showed that motion artifacts can be effectively inhibited by separating signal from non-signal components. With the right emission filter, Di-4-ANEPPS produces positive and negative signals at different excitation wavelengths/channels (e.g. blue vs. green), while non-signal components, which are usually caused by motion, appear equally in both channels, see Kappadan et al. [2020], Bachtel et al. [2011], and Zhang et al. These non-signal components can be removed by calculating the ratio of the two signals.

Ratiometry is ideally combined with numerical motion compensation, because neither technique alone can fully compensate motion artifacts, see also Tutorials 1 and 4. Numerical motion tracking and compensation removes motion only virtually; it does not remove the physical motion between the tissue and the illumination. This motion can cause illumination changes (e.g. the heart moves back and forth between two LEDs), which persist in the motion-stabilized videos and modulate and distort the signal that we aim to measure. Conversely, ratiometry alone does not extract the signal in a co-moving frame of reference, which is essential for inhibiting motion artifacts. Taken together, however, the two techniques can measure the signal in a co-moving frame of reference and compensate the non-stationary illumination in the motion-stabilized videos.
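To see why the ratio removes non-signal components, consider a toy example (a minimal synthetic sketch, not part of the tutorial’s dataset): if a multiplicative non-signal component, e.g. caused by motion, enters both excitation channels equally, it cancels in their ratio.

import numpy as np

t = np.linspace(0, 1, 250)
signal_green = 1.0 - 0.2 * np.exp(-((t - 0.5)**2) / 0.002)  # fluorescence dips during the action potential
motion = 1.0 + 0.3 * np.sin(2 * np.pi * 5 * t)              # motion-induced modulation, identical in both channels

f_green = signal_green * motion   # green channel: signal times motion component
f_blue = 1.0 * motion             # blue channel: motion component only, no signal
ratio = f_green / f_blue          # the motion term cancels, leaving signal_green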

In the example below, the heart was stained with Di-4-ANEPPS and imaged using alternating blue and green illumination at 500 fps. More specifically, the tissue was illuminated with blue light in every even-indexed frame and with green light in every odd-indexed frame (counting from 0). This approach is also referred to as ‘excitation ratiometry’. After numerical motion tracking and stabilization, the motion-stabilized green video is divided by the motion-stabilized blue video to obtain a motion-stabilized ratiometric video. Using this method, both motion artifacts and illumination artifacts (which are themselves caused by motion) can be significantly reduced.

First, we load the video data, which is stored as a single file (Ratiometry_1.npy) containing the blue and green videos in an interleaved fashion (frame 0 = blue, frame 1 = green, frame 2 = blue, etc.). Using optimap’s load_video() function, we can specify to load only every second frame and to start reading from either the first or the second frame:

import optimap as om

filename = om.download_example_data("Ratiometry_1.npy")
video_blue = om.load_video(filename, start_frame=0, step=2)
video_green = om.load_video(filename, start_frame=1, step=2)

om.print_properties(video_blue)
om.print_properties(video_green)
Downloading data from 'https://cardiacvision.ucsf.edu/sites/g/files/tkssra6821/f/optimap-Ratiometry_1.npy_.webm' to file '/home/runner/work/optimap/optimap/docs/tutorials/optimap_example_data/Ratiometry_1.npy'.
------------------------------------------------------------------------------------------
array with dimensions: (600, 128, 128)
datatype of array: uint16
minimum value in entire array: 606
maximum value in entire array: 29340
------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------
array with dimensions: (600, 128, 128)
datatype of array: uint16
minimum value in entire array: 688
maximum value in entire array: 30556
------------------------------------------------------------------------------------------

Alternatively, one could load the entire interleaved video and split it manually into the blue and green videos, as sketched below. Both videos contain ‘uint16’ intensity values in each pixel, i.e. whole unsigned integer values between 0 and 65535. Each video is 128 x 128 pixels in size and contains 600 frames; the original interleaved video contains 1200 frames. Judging by the maximum values, the green video is only slightly brighter than the blue one.
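For instance (a sketch of the manual split, assuming the same interleaving as above):

video = om.load_video(filename)  # all 1200 interleaved frames
video_blue = video[::2]          # even-indexed frames: blue illumination
video_green = video[1::2]        # odd-indexed frames: green illumination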

When we play the two videos next to each other, we notice that there is signal only in the green video, not in the blue video:

om.show_video_pair(video_green, video_blue,
                   title1="green video with signal",
                   title2="blue video without signal");

If we plot and compare optical traces from the green and the blue video, we can see stronger intensity fluctuations in the green video in most, though not all, areas:

om.trace.set_default_trace_window('disc') # traces are sampled from a disc-shaped region with a diameter specified by 'size'
om.compare_traces([video_green, video_blue],
                  labels=['green','blue'],
                  colors=['green','blue'])

Keep in mind that the motion in both videos has not been compensated yet and produces strong motion artifacts. In the green video, the intensity fluctuations are caused partly by motion and partly by the optical signal associated with the action potential. In the blue video, nearly all fluctuations are caused by motion. Let’s look at a few specific examples. Above, the function compare_traces() started in interactive mode because we did not provide specific coordinates as arguments. We can also provide specific coordinates, e.g. the pixel at (50, 50), or a list of pixels, as follows:

positions = [(77, 57), (72, 74), (65, 42), (80, 75), (90, 90)]
om.compare_traces([video_green, video_blue],
                  positions,
                  labels=['green','blue'],
                  colors=['green', 'blue'],
                  fps=250)

In some of these examples, the green optical trace shows steep downstrokes, which here correspond to the upstroke of the action potential (the fluorescence decreases upon depolarization), whereas the blue trace does not exhibit such downstrokes and shows much weaker and less consistent intensity fluctuations. These fluctuations are largely caused by motion, whereas the green trace has a signal component produced by the fluorescent dye. You can best explore this behavior using the interactive mode. We can visualize where these traces were sampled as follows:

om.show_positions(positions, video_green[0])

Now let’s perform numerical motion tracking and motion stabilization on the green and blue videos:

video_warped_green = om.motion_compensate(video_green, contrast_kernel=5, ref_frame=0)
video_warped_blue = om.motion_compensate(video_blue, contrast_kernel=5, ref_frame=0)
om.show_videos([video_green, video_warped_green, video_blue, video_warped_blue],
               titles=["video green", "stabilized video green", "video blue", "stabilized video blue"],
               figsize=(8, 3));
calculating flows (CPU): 100%|██████████| 600/600 [00:04<00:00, 134.60it/s]
calculating flows (CPU): 100%|██████████| 600/600 [00:04<00:00, 134.70it/s]

In the comparison above, you can see that both the green and blue videos were successfully tracked and warped (motion-stabilized). There is no residual motion after warping and there are no tracking artifacts. Please refer to Christoph and Luther [2018], Lebert et al. [2022], and Kappadan et al. [2020] for details. Internally, motion_compensate() computes a contrast-enhanced video, registers/tracks the motion in it, and then warps the original video accordingly to stabilize the motion.
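The individual steps can also be performed manually; here is a sketch (assuming the contrast_enhancement(), estimate_displacements(), and warp_video() helpers of optimap’s motion module; check the API reference for the exact signatures):

video_contrast = om.motion.contrast_enhancement(video_green, kernel_size=5)  # enhance trackable texture
flows = om.motion.estimate_displacements(video_contrast, ref_frame=0)        # track motion relative to frame 0
video_warped_green = om.motion.warp_video(video_green, flows)                # warp the original video to stabilize it

Now let’s look at how the green and blue traces have changed after the motion stabilization: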

om.compare_traces([video_green, video_warped_green],
                  positions,
                  labels=['green','warped green'],
                  size=3,
                  colors=['lightgreen', 'green'])

We plotted the motion-stabilized green traces slightly darker than the original green traces. In this example, the motion-stabilized traces are more consistent and exhibit less variability and fewer deflections during the repolarization phase of the action potential. In one example, the action potential shape was not visible at all before motion stabilization, but becomes visible after it. The blue traces change as follows:

om.compare_traces([video_blue, video_warped_blue],
                  positions,
                  labels=['blue','warped blue'],
                  size=3,
                  colors=['lightblue', 'blue'])

We plotted the motion-stabilized blue traces in dark blue and the original traces in light blue. Here the changes are neither obvious nor systematic, because we are largely looking at motion artifacts. If anything, the traces become more regular. You can explore these changes yourself using the interactive mode of the compare_traces() function:

om.compare_traces([video_green, video_warped_green],
                  labels=['green','warped green'],
                  size=3,
                  colors=['lightgreen', 'green'],
                  fps=250)

Now we can combine the green and blue motion-stabilized videos to obtain a motion-stabilized ratiometric video:

video_warped_ratio = video_warped_green/video_warped_blue
om.show_videos([video_green, video_warped_green, video_warped_blue, video_warped_ratio],
               titles=["video green", "stabilized video green", "stabilized video blue", "stabilized video ratio"],
               figsize=(8, 3));
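Note that dividing the two uint16 videos promotes the result to floating point. If the warped blue video contained zero-valued pixels (e.g. at the image borders after warping), the division would produce infinities; a defensive variant of the division (an optional precaution, not needed for this dataset, whose minimum value is well above zero):

import numpy as np

video_warped_ratio = np.divide(video_warped_green.astype(np.float32),
                               video_warped_blue.astype(np.float32),
                               out=np.zeros(video_warped_green.shape, dtype=np.float32),
                               where=video_warped_blue > 0)  # divide only where the denominator is non-zero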

To be able to compare the motion-stabilized green traces with the corresponding motion-stabilized ratiometric traces, we need to renormalize the videos:

video_warped_green_norm = om.video.normalize_pixelwise_slidingwindow(video_warped_green, window_size=100)
video_warped_ratio_norm = om.video.normalize_pixelwise_slidingwindow(video_warped_ratio, window_size=100)

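normalize_pixelwise_slidingwindow() rescales each pixel’s trace using the minimum and maximum within a sliding temporal window, which removes slow trends in brightness. A minimal sketch of the idea (illustrative only, not the library’s implementation):

import numpy as np

def normalize_pixelwise_sliding(video, window_size):
    # normalize each frame t with the per-pixel min/max over a window centered at t
    half = window_size // 2
    out = np.empty(video.shape, dtype=np.float32)
    for t in range(video.shape[0]):
        window = video[max(0, t - half):t + half + 1]
        vmin = window.min(axis=0)
        vmax = window.max(axis=0)
        out[t] = (video[t] - vmin) / (vmax - vmin + 1e-12)  # guard against constant windows
    return out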

Now, if we compare the motion-stabilized green traces with the corresponding motion-stabilized ratiometric traces, we can see slight improvements in signal quality:

om.compare_traces([video_warped_green_norm[:200], video_warped_ratio_norm[:200]],
                  positions,
                  labels=['warped green','ratio'],
                  size=3,
                  colors=['g', 'k'])

For instance, the action potential duration (APD) is more heterogeneous in the non-ratiometric motion-stabilized video than in the ratiometric one.

Finally, an alternative approach worth exploring: since the blue channel contains no fluorescence signal, one could estimate the flow field in the blue channel only and use it to compensate the motion in the green channel, so that the optical signal itself cannot interfere with the motion tracking.
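A sketch of this idea (again assuming the contrast_enhancement(), estimate_displacements(), and warp_video() helpers used above; note that the two channels are recorded in alternating frames, so the blue flow field is offset by one frame in time, which is a reasonable approximation when motion is slow relative to the frame rate):

# track motion in the signal-free blue channel ...
contrast_blue = om.motion.contrast_enhancement(video_blue, kernel_size=5)
flows_blue = om.motion.estimate_displacements(contrast_blue, ref_frame=0)

# ... and apply the same flow field to both channels ("_b" marks blue-tracked versions)
video_warped_green_b = om.motion.warp_video(video_green, flows_blue)
video_warped_blue_b = om.motion.warp_video(video_blue, flows_blue)
video_warped_ratio_b = video_warped_green_b / video_warped_blue_b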