
Skeletons from light-level data#

This tutorial will show you how to extract skeletons from confocal microscopy stacks.

This example is not executed

In contrast to almost all other tutorials, this one is not executed when the documentation is built. Consequently, it also does not display any actual code output or plots - the images shown are statically embedded. The main reason for this is that the example requires downloading a large-ish file, which is a pain in the neck to get working in the CI environment.

Extracting neuron skeletons from microscopy data is a common but non-trivial task. There are about as many ways to do this as there are people doing it - from fully manual to fully automated tracing.

In this tutorial, we will show you a fully automated way using a number of easy-to-install Python packages. If this isn't for you, check out the Alternatives section at the end of this tutorial.

Requirements#

Please make sure you have the following packages installed:

  • pynrrd to load image stacks
    pip install pynrrd -U
    
  • connected-components-3d (cc3d) to label connected components
    pip install connected-components-3d -U
    
  • kimimaro to extract the skeletons
    pip install kimimaro -U
    

Preparing the data#

The pipeline we're using here was designed for pre-segmented data, so there is little in the way of dealing with noisy data. Fortunately, the image stack we will use is exceptionally clean which makes the skeletonization process very straightforward.

In practice, you may have to do some pre-processing to clean up your data before running the skeletonization. If your run-of-the-mill thresholding, denoising, etc. doesn't cut it, you can also try more advanced segmentation techniques.
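For simple cases, a couple of scipy filters can already go a long way. Below is a minimal sketch - not part of this tutorial's pipeline; scipy is an extra dependency and the filter settings are placeholders you will have to tune for your data:

# Minimal denoising sketch using scipy (extra dependency, not required
# for the rest of this tutorial); `size` and `sigma` are placeholders
import nrrd
from scipy import ndimage

im, header = nrrd.read("neuron.nrrd")

# Median filter suppresses salt-and-pepper noise while preserving edges
denoised = ndimage.median_filter(im, size=3)

# A light Gaussian blur smooths out remaining speckle
denoised = ndimage.gaussian_filter(denoised, sigma=1)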

There are various fairly easy-to-use tools available for this, e.g. Ilastik (see the pixel classification and voxel segmentation tutorials) or DeepImageJ.

Download Image Stack#

As example data, we will use a confocal stack from the Janelia Split-Gal4 collection. We picked the SS00731 line because it's already fairly clean as is and there are high-resolution stacks with stochastic multi-color labeling of individual neurons available for download.

Scroll all the way to the bottom of the page and in the dropdown for the left-most image, select "Download H5J stack: Unaligned".

[Image: the download dropdown on the Split-Gal4 website]

Convert to NRRD#

Next, we need to open this file in Fiji/ImageJ to convert it to a format we can work with in Python:

  1. Fire up Fiji/ImageJ
  2. Drag & drop the SS00731-20140620_20_C5-f-63x-ventral-Split_GAL4-unaligned_stack.h5j file into Fiji
  3. Go to "Image" -> "Colors" -> "Split Channels" to split the image into the channels
  4. Discard all but the red "C1" channel with our neurons
  5. Go to "Image" -> "Type" -> "8-bit" to convert the image to 8-bit (optional but recommended)
  6. Save via "File" -> "Save As" -> "NRRD" and save the file as neuron.nrrd

[Image: the Z stack in Fiji]

Extracting the Skeleton#

Now that the file is in a format we can load into Python, let's get started:

import kimimaro
import nrrd
import navis
import cc3d
import numpy as np

First load the image stack:

# `im` is a numpy array, `header` is a dictionary
im, header = nrrd.read("neuron.nrrd")
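The voxel size - which we will need later for the spacing and anisotropy parameters - is stored in the NRRD header. Depending on how Fiji wrote the file, it ends up under 'space directions' or 'spacings':

# Inspect the voxel size; depending on how Fiji wrote the file it is
# stored as a 'space directions' matrix or a 'spacings' vector
print(header.get("space directions"))
print(header.get("spacings"))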

Next, we need to find some sensible threshold to binarize the image. This is not strictly necessary (see the note further down) but, at least for starters, it is more intuitive.

# Threshold the image
mask = (im >= 20).astype(np.uint8)

You can inspect the mask to see if the thresholding worked as expected:

import matplotlib.pyplot as plt
plt.imshow(mask.max(axis=2))

With the octarine backend, you can also visualize the volume in 3D:

# spacing can be found in the `header` dictionary
import octarine as oc
v = oc.Viewer()
v.add_volume(mask, spacing=(.19, .19, .38))

[Image: 3D rendering of the mask]

A couple of notes on the thresholding:

  • feel free to test the threshold in e.g. ImageJ/Fiji - or compute one automatically (see the sketch below)
  • remove as much background as possible without disconnecting neurites
  • perfection is the enemy of progress: we can denoise/reconnect during post-processing
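If you prefer not to hand-pick a threshold, an automatically computed one makes a decent starting point. Here is a sketch using scikit-image - an extra dependency, not needed for the rest of this tutorial - and whether Otsu's method gives a sensible value will depend on your data:

# Sketch: pick a threshold automatically (pip install scikit-image);
# Otsu's method may land higher or lower than our hand-picked 20
from skimage.filters import threshold_otsu

thresh = threshold_otsu(im)
mask = (im >= thresh).astype(np.uint8)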

Next, we need to label the connected components in the image:

# Extract the labels
labels, N = cc3d.connected_components(mask, return_N=True)

Visualize the labels:

import cmap
import octarine as oc
v = oc.Viewer()
v.add_volume(labels, spacing=(.19, .19, .38), color=cmap.Colormap('prism'))

[Image: labeled connected components]

Experiment

cc3d.connected_components also works with non-thresholded images - see the delta parameter.
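A quick sketch of what that could look like - the delta value here is just a guess and will need tuning:

# Sketch: connected components on the raw, non-thresholded image.
# Adjacent voxels are merged if their intensities differ by <= delta
labels_raw, N_raw = cc3d.connected_components(im, delta=10, return_N=True)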

# Collect some statistics
stats = cc3d.statistics(labels)

print("Total no. of labeled componenents:", N)
print("Per-label voxel counts:", np.sort(stats["voxel_counts"])[::-1])
print("Label IDs:", np.argsort(stats["voxel_counts"])[::-1])
Total no. of labeled components: 37836
Per-label voxel counts: [491996140    527374    207632 ...         1         1         1]
Label IDs: [    0  6423  6091 ... 22350 22351 18918]

Note how label 0 has suspiciously many voxels? That's because it is the background label. We need to make sure to exclude it from the skeletonization process:

to_skeletonize = np.arange(1, N + 1)  # foreground labels run from 1 to N
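Optionally, we can also use the voxel counts from stats to pre-filter tiny components ourselves instead of leaving all of that to kimimaro's dust_threshold - fewer object_ids means less work:

# Optional: keep only labels with at least 500 voxels (and drop the
# background label 0); this mirrors the dust_threshold used below
counts = stats["voxel_counts"]
candidates = np.where(counts >= 500)[0]
to_skeletonize = candidates[candidates != 0]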

Now we can run the actual skeletonization!

Skeletonization parameters

There are a number of parameters that are worth explaining first because you might want to tweak them for your data:

  • scale & const: control how detailed your skeleton will be: lower = more detailed but more noise
  • anisotropy: controls the voxel size - see the header dictionary for the voxel size of our image
  • dust_threshold: controls how small connected components are skipped
  • object_ids: a list of labels to process (remember that we skipped the background label)
  • max_path: if this is set, the algorithm will only process N paths in each skeleton - you can use this to finish early (e.g. for testing)

See the kimimaro repository for a detailed explanation of the parameters!

skels = kimimaro.skeletonize(
    labels,
    teasar_params={
        "scale": 1.5,
        "const": 1,  # physical units (1 micron in our case)
        "pdrf_scale": 100000,
        "pdrf_exponent": 4,
        "soma_acceptance_threshold": 3.5,  # physical units
        "soma_detection_threshold": 1,  # physical units
        "soma_invalidation_const": 0.5,  # physical units
        "soma_invalidation_scale": 2,
        "max_paths": None,  # default None
    },
    object_ids=list(to_skeletonize), # process only the specified labels
    dust_threshold=500,  # skip connected components with fewer than this many voxels
    anisotropy=(0.19, 0.19, 0.38),  # voxel size in physical units
    progress=True,  # show progress bar
    parallel=6,  # <= 0 all cpu, 1 single process, 2+ multiprocess
    parallel_chunk_size=1,  # how many skeletons to process before updating progress bar
)

skels is a dictionary of {label: cloudvolume.Skeleton}. Let's convert these to NAVis neurons:

# Convert skeletons to NAVis neurons
nl = navis.NeuronList([navis.read_swc(s.to_swc(), id=i) for i, s in skels.items()])
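If you want to keep the results around, navis can write the skeletons straight to disk. A minimal example - the output path is just an illustration:

# Write each skeleton to its own SWC file; '{neuron.id}' is filled in
# per neuron (here: the original cc3d label)
navis.write_swc(nl, "skeletons/{neuron.id}.swc")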

Based on the voxel counts in stats, we can make an educated guess that label 6423 is one of our neurons. Let's visualize it in 3D:

import octarine as oc
v = oc.Viewer()
v.add_neurons(nl.idx[6423], color='r', linewidth=2, radius=False)
v.add_volume(im, spacing=(.19, .19, .38), opacity=.5)

[Animation: the extracted skeleton overlaid on the image stack]

This looks pretty good right off the bat! Obviously, we also have the other large neuron (label 6091) plus a bunch of smaller skeletons in our NeuronList. Let's have a look at those as well:

[Image: all extracted skeletons]

Zooming in on 6091, you will see that it wasn't fully skeletonized: some branches are missing and others are disconnected. That's because our threshold for the mask was too high (this neuron has a weaker signal than the other) and/or because we dropped too many fragments during skeletonization (see the dust_threshold parameter).

[Image: zoom-in on neuron 6091 showing missing and disconnected branches]
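Remember the note above that we can reconnect things during post-processing? A sketch of what that could look like with navis - the max_dist and size values are guesses you will want to tune:

# Sketch: stitch disconnected pieces within a skeleton back together;
# `max_dist` caps how far apart two fragments may be (physical units)
healed = navis.heal_skeleton(nl.idx[6091], max_dist=5)

# Tiny side branches (skeletonization noise) can be pruned as well
cleaned = navis.prune_twigs(healed, size=2)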

Alternatives#

If the pipeline described in this tutorial does not work for you, there are a number of alternatives:

  1. Simple Neurite Tracer is a popular ImageJ plugin for semi-automated tracing
  2. Folks at the Allen Institute for Brain Science have published a protocol for reconstructing neurons
  3. NeuTube is open-source software for reconstructing neurons from fluorescence microscopy images

Acknowledgements#

The packages we used here were written by the excellent Will Silversmith from the Seung lab in Princeton. The image stack we processed is from the Janelia Split-Gal4 collection and was published as part of the Cheong, Eichler, Stuerner, et al. (2024) paper.


Download Python source code: zzz_tutorial_io_05_skeletonize.py

Download Jupyter notebook: zzz_tutorial_io_05_skeletonize.ipynb

Gallery generated by mkdocs-gallery