Tip

Download this tutorial as a Jupyter notebook, or as a Python script with code cells. We highly recommend using Visual Studio Code to execute this tutorial.

Tutorial 5: Activation Maps¶

This tutorial demonstrates how to compute local activation times and activation maps from cardiac optical mapping data using optimap. Local activation times (often referred to as LATs) are the times at which the tissue becomes electrically activated. Computing local activation times corresponds to determining when the optical signal in a given pixel crosses a pre-defined threshold or intensity value. For instance, if the optical trace is normalized and fluctuates between [0, 1], the tissue can be defined as being ‘electrically activated’ when the time-series rises above or falls below 0.5 (depending on the fluorescent indicator and the polarity of the signal).
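For a single normalized optical trace, this thresholding step can be sketched as follows. This is only a minimal illustration with a synthetic trace and a hypothetical find_activation_time() helper, not optimap’s implementation (optimap provides compute_activation_map() for this, used below):

import numpy as np

def find_activation_time(trace, threshold=0.5, fps=500):
    """Return the time (in ms) at which a normalized trace first reaches the threshold."""
    above = trace >= threshold
    if not above.any():
        return np.nan              # this pixel never activates
    frame = np.argmax(above)       # index of the first frame at/above the threshold
    return frame * 1000 / fps      # convert the frame index to milliseconds

# synthetic upstroke rising from 0 to 1 around frame 10
trace = np.clip((np.arange(20) - 8) / 4, 0, 1)
print(find_activation_time(trace))  # -> 20.0 (ms, at 500 fps)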

First, we load and preprocess an example dataset in which a planar action potential wave propagates across the ventricles of a rabbit heart during pacing:

import optimap as om
import numpy as np
import matplotlib.pyplot as plt

filename = om.download_example_data("Sinus_Rabbit_1.npy")
video = om.load_video(filename, frames=20)
video = om.video.rotate_left(video)

# motion compensation and normalization
video_warped = om.motion.motion_compensate(video, 5, ref_frame=0)
video_warped_norm = om.video.normalize_pixelwise(video_warped)
calculating flows (CPU): 100%|██████████| 20/20 [00:00<00:00, 21.30it/s]

We use optimap’s background_mask() function to blank out the background in the image, such that the activation map is only computed for pixels showing tissue.

# remove background by masking
mask = om.background_mask(video_warped[0], show=False)
mask = om.image.dilate_mask(mask, iterations=2)
om.image.show_mask(mask, video_warped[0], title="Background mask")
video_warped_norm[:, mask] = np.nan
Creating mask with detected threshold 0.01978021978021978
[Figure: Background mask]

Because the rabbit heart was stained with the voltage-sensitive dye Di-4-ANEPPS, the tissue becomes darker when it depolarizes (negative signal / polarity):

om.show_video_pair(video, video_warped_norm, title1="original video",
                   title2="warped, normalized video", interval=100)

Let’s plot some of the video frames as the wave propagates across the ventricles:

figure, axs = plt.subplots(1, 6, figsize=(10, 3))
axs[0].imshow(video[0], cmap='gray')
axs[0].set_title('original')
axs[0].set_axis_off()

for i in range(1, 6):
    axs[i].imshow(video_warped_norm[i*3], cmap='gray', vmin=0, vmax=1)
    axs[i].set_axis_off()
    time = (i*3) * (1000/500)  # 500 fps recording, show time in ms
    axs[i].set_title(f"{time:.0f} ms")
plt.axis('off')
plt.show()
[Figure: original frame and pixel-wise normalized frames at 6–30 ms]

We can now compute an activation map by identifying, in each pixel, the local activation time at which the action potential wave front passes through that pixel.
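Conceptually, this amounts to applying the threshold crossing described above to every pixel of the normalized video. The following is only a rough numpy sketch of the idea, not optimap’s implementation (optimap provides compute_activation_map(), used in the next section):

import numpy as np

def activation_map_from_normalized(video_norm, threshold=0.5, fps=500):
    """Per-pixel time of the first threshold crossing, in ms."""
    above = video_norm >= threshold    # (t, y, x) boolean array; NaN compares as False
    first = np.argmax(above, axis=0)   # first frame at/above the threshold in each pixel
    never = ~above.any(axis=0)         # pixels that never cross the threshold (e.g. background)
    lat = first.astype(np.float32) * 1000 / fps
    lat[never] = np.nan
    return lat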

Computing Activation Maps from Pixel-wise Normalized Optical Maps¶

We will first compute an activation map with a pixel-wise normalized video. The pixel-wise normalized video contains values between 0 and 1:

om.print_properties(video_warped_norm)
------------------------------------------------------------------------------------------
array with dimensions: (20, 320, 320)
datatype of array: float32
minimum value in entire array: 0.0
maximum value in entire array: 0.9999989867210388
number of NaNs in array: 1020420
------------------------------------------------------------------------------------------

A simple pixel-wise normalization was sufficient (as opposed to a sliding-window pixel-wise normalization, see Tutorial 2) because we isolated a short part of the video that is only 20 frames long. In other cases, e.g. with longer recordings or residual motion, it might be necessary to use a sliding-window pixel-wise normalization or a frame-wise difference video, see below.
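For a longer recording, the sliding-window variant could be used instead. A sketch, assuming the normalize_pixelwise_slidingwindow() function and window size argument introduced in Tutorial 2 (here 60 frames, i.e. 120 ms at 500 fps):

# sliding-window pixel-wise normalization (for longer recordings);
# the window size is given in frames
video_warped_norm_sw = om.video.normalize_pixelwise_slidingwindow(video_warped, window_size=60)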

Let’s plot some of the optical traces (manually selected at locations that are activated one after another as the wave passes):

positions =  [(227, 181), (199, 162), (213, 171), (240, 189), (176, 146), (188, 153)]
fig, axs = plt.subplots(1, 2, figsize=(10,5))
om.trace.show_positions(positions, video[0], ax=axs[0])
traces = om.extract_traces(video_warped_norm,
                           positions,
                           fps=500,
                           ax=axs[1])
axs[1].axhline(y=0.5, color='r', linestyle='dashed', label='threshold')
axs[1].text(0.030, 0.52, 'threshold', color='r')
plt.show()
[Figure: selected positions and their optical traces with the 0.5 threshold]

We can use optimap’s compute_activation_map() function to automatically compute a two-dimensional activation map which shows the local activation times in every pixel:

activation_map = om.compute_activation_map(video_warped_norm, threshold=0.5,
                                           inverted=True, fps=500)
[Figure: activation map]

Note that we used the argument inverted=True due to the negative polarity of the signal (\(- \Delta F / F\)). If we had manually inverted the video beforehand, or with calcium imaging data (positive polarity), this would not be necessary.
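For instance, the inversion could equivalently be performed on the normalized video itself; a sketch (since the pixel-wise normalized video lies between 0 and 1, subtracting it from 1 flips the polarity):

# alternative (sketch): invert the normalized video manually, then inverted=True is not needed
video_warped_norm_inv = 1.0 - video_warped_norm
activation_map_alt = om.compute_activation_map(video_warped_norm_inv, threshold=0.5,
                                               inverted=False, fps=500, show=False)

The range of local activation times can be displayed with: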

om.print_properties(activation_map)
------------------------------------------------------------------------------------------
array with dimensions: (320, 320)
datatype of array: float32
minimum value in entire array: 0.0
maximum value in entire array: 36.0
number of NaNs in array: 51021
------------------------------------------------------------------------------------------

In this case, the local activation times are given in milliseconds (based on the argument fps=500; at 500 fps each frame corresponds to 2 ms) and they range between 0 ms and 36 ms. The function compute_activation_map() uses show_image() to plot the activation map (this can be disabled with the argument show=False):

om.show_image(activation_map, cmap="jet", title='Activation Map',
              show_colorbar=True, colorbar_title='Activation Time [ms]')
[Figure: Activation Map with colorbar showing Activation Time [ms]]

We have plotted the activation map using the jet colormap; here are some other options:

fig, axs = plt.subplots(1, 3, figsize=(8, 3))
om.show_image(activation_map, cmap='jet', show_colorbar=True, title='cmap=jet', ax=axs[0])
om.show_image(activation_map, cmap='magma', show_colorbar=True, title='cmap=magma', ax=axs[1])
om.show_image(activation_map, cmap='twilight_shifted', show_colorbar=True, title='cmap=twilight_shifted', ax=axs[2])
plt.suptitle('Activation maps with different colormaps')
plt.show()
[Figure: Activation maps with different colormaps]

Computing Activation Maps from Frame-Wise Difference Optical Maps¶

In Tutorial 2, we introduced the frame-wise difference method to emphasize sudden temporal changes in a video. Sudden temporal changes are caused by the upstrokes of the action potential or calcium transients, and the frame-wise difference filter is therefore ideally suited to visualizing wavefronts as they propagate across the tissue.

video_diff = om.video.temporal_difference(video_warped, 5)
video_diff[:, mask] = np.nan
video_diff_norm = om.video.normalize_pixelwise(video_diff)

The frame-wise difference approach enhances the action potential upstroke; see the following video with the frame-wise difference video in the middle and our previous pixel-wise normalized video on the right:

om.show_videos([video, video_diff_norm, video_warped_norm],
               titles=["original", "warped, frame-wise diff", "warped, normalized"],
               interval=100)

Let’s visualize the wavefront as an overlay over the raw (motion-stabilized) video. We will need to further post-process the data as follows:

# keep only negative deflections (the upstroke has negative polarity)
video_diff[video_diff > 0] = 0
# invert and normalize so that the wavefront takes values between 0 and 1
video_diff_norm = om.video.normalize_pixelwise(-video_diff)

The action potential upstroke overlaid onto the raw video:

om.video.show_video_overlay(video_warped,
                            overlay=video_diff_norm,
                            vmin_overlay=-1,
                            vmax_overlay=1)

We will now compute an activation map from the frame-wise difference video:
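A possible approach is sketched below (an assumption on our part, as this part of the tutorial is still work in progress): apply compute_activation_map() to the post-processed frame-wise difference video, in which the upstroke now has positive polarity, so no inversion is needed.

# sketch: activation map from the inverted, normalized frame-wise difference video
activation_map_diff = om.compute_activation_map(video_diff_norm, threshold=0.5,
                                                inverted=False, fps=500)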

Warning

This tutorial is currently work in progress. We will add more information soon.