
Transformations#

This tutorial will show you how to transform and mirror neurons.

Introduction#

As of version 0.5.0, NAVis includes functions that let you transform and mirror spatial data (e.g. neurons). This functionality splits into high- and low-level functions. In this tutorial, we will start by exploring the higher-level functions that most users will use and then take a sneak peek at the low-level functions.

At the moment, NAVis supports the following transform types (a minimal code example follows the list):

  • CMTK warp transforms
  • Hdf5 deformation fields
  • Elastix transforms
  • landmark-based thin-plate spline transforms
  • affine transforms
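
To give you a feel for what these look like in code, here is a minimal sketch using the simplest of them - an affine transform that scales points from nanometers to micrometers. Note that the 4x4 homogeneous matrix layout is an assumption on my part; check the class documentation before relying on it.

import numpy as np
import navis

# Assumption: AffineTransform accepts a 4x4 homogeneous transformation matrix
M = np.diag([1e-3, 1e-3, 1e-3, 1.0])  # scale x/y/z from nm to um
nm_to_um = navis.transforms.affine.AffineTransform(M)

pts_nm = np.array([[8000, 16000, 24000]])
pts_um = nm_to_um.xform(pts_nm)  # expected: [[8., 16., 24.]]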

flybrains#

Since NAVis brings the utility but does not ship with any transforms, we have to either generate those ourselves or get them elsewhere. Here, we will showcase the flybrains library that provides a number of different transforms directly to NAVis. Setting up and registering your own custom transforms will be discussed further down.

First, you need to get flybrains. Please follow the instructions to install and download the bridging registrations before you continue.

import flybrains

Importing flybrains automatically registers its transforms with NAVis. This in turn allows NAVis to work out a sequence of bridging transformations to map between any two connected template spaces.

[Figure: the flybrains bridging graph - template spaces connected by registered bridging transforms]

In addition to those bridging transforms, flybrains also contains mirror registrations (we will cover those later), metadata and meshes for the template brains:

# This is the Janelia "hemibrain" template brain
print(flybrains.JRCFIB2018F)

Out:

Template brain
--------------
Name: JRCFIB2018F
Short Name: JRCFIB2018F
Type: None
Sex:  F
Dimensions: 34432 x 39552 x 41408 voxels
Voxel size:
  x = 8 nanometers
  y = 8 nanometers
  z = 8 nanometers
Bounding box (nanometers):
  x = 0, y = 0, z = 0,
  x = 275456, y = 316416, z = 331264,
Description: Nanometer-calibrated version of Janelia FIB hemibrain dataset.
DOI: https://doi.org/10.1101/2020.01.21.911859

import navis
import matplotlib.pyplot as plt

# This is the hemibrain neuropil surface mesh
fig, ax = navis.plot2d(flybrains.JRCFIB2018F, view=("x", "-z"))
plt.tight_layout()

[Figure: the JRCFIB2018F hemibrain neuropil surface mesh]

You can check the registered transforms like so:

navis.transforms.registry.summary()

Note

The documentation is built in an environment with a minimal number of transforms registered. If you have installed and imported flybrains, your own summary may contain more (or different) transforms than shown in this document.
source target transform type invertible weight
0 JRC2018F JRCFIB2018Fum <navis.transforms.h5reg.H5transform object at ... bridging True 1.0
1 JRC2018F FAFBum <navis.transforms.h5reg.H5transform object at ... bridging True 1.0
2 FAFB14 FAFB14sym <navis.transforms.thinplate.TPStransform objec... bridging True 1.0
3 FLYWIRE FLYWIREsym <navis.transforms.thinplate.TPStransform objec... bridging True 1.0
4 FAFB14 JRCFIB2022M <navis.transforms.thinplate.TPStransform objec... bridging True 1.0
5 FLYWIRE JRCFIB2022M <navis.transforms.thinplate.TPStransform objec... bridging True 1.0
6 MANC FANC <navis.transforms.thinplate.TPStransform objec... bridging True 1.0
7 FANC FANCum_fixed <navis.transforms.base.FunctionTransform objec... bridging False 1.0
8 FANCum_fixed JRCVNC2018F_reflected <navis.transforms.elastix.ElastixTransform obj... bridging False 1.0
9 JRCVNC2018F_reflected JRCVNC2018F <navis.transforms.base.FunctionTransform objec... bridging False 1.0
10 JRCVNC2018F JRCVNC2018F_reflected <navis.transforms.base.FunctionTransform objec... bridging False 1.0
11 JRCVNC2018F_reflected FANCum_fixed <navis.transforms.elastix.ElastixTransform obj... bridging False 1.0
12 FANCum_fixed FANC <navis.transforms.base.FunctionTransform objec... bridging False 1.0
13 JRCFIB2022Mraw JRCFIB2022M <navis.transforms.affine.AffineTransform objec... bridging True 0.1
14 MANCraw MANC <navis.transforms.affine.AffineTransform objec... bridging True 0.1
15 JRCFIB2018Fraw JRCFIB2018F <navis.transforms.affine.AffineTransform objec... bridging True 0.1
16 FLYWIREraw FLYWIRE <navis.transforms.affine.AffineTransform objec... bridging True 0.1
17 FAFB14raw FAFB14 <navis.transforms.affine.AffineTransform objec... bridging True 0.1
18 FANCraw FANC <navis.transforms.affine.AffineTransform objec... bridging True 0.1
19 JFRC2 JFRC2010 <navis.transforms.affine.AffineTransform objec... bridging True 0.1
20 FAFB14um FAFB14 <navis.transforms.affine.AffineTransform objec... bridging True 0.1
21 FLYWIREum FLYWIRE <navis.transforms.affine.AffineTransform objec... bridging True 0.1
22 MANCum MANC <navis.transforms.affine.AffineTransform objec... bridging True 0.1
23 JRCFIB2018Fum JRCFIB2018F <navis.transforms.affine.AffineTransform objec... bridging True 0.1
24 FANCum FANC <navis.transforms.affine.AffineTransform objec... bridging True 0.1
25 JRCFIB2022Mum JRCFIB2022M <navis.transforms.affine.AffineTransform objec... bridging True 0.1
26 JRCFIB2022M None <navis.transforms.thinplate.TPStransform objec... mirror True 1.0
27 JRCFIB2022Mraw None <navis.transforms.thinplate.TPStransform objec... mirror True 1.0
28 FANC None <navis.transforms.thinplate.TPStransform objec... mirror True 1.0
29 FLYWIRE None <navis.transforms.thinplate.TPStransform objec... mirror True 1.0
30 FAFB14 None <navis.transforms.thinplate.TPStransform objec... mirror True 1.0
31 FAFB None <navis.transforms.thinplate.TPStransform objec... mirror True 1.0
32 hemibrain JRCFIB2018F <navis.transforms.base.AliasTransform object a... bridging True 0.0
33 hemibrainraw JRCFIB2018Fraw <navis.transforms.base.AliasTransform object a... bridging True 0.0
34 hemibrainum JRCFIB2018Fum <navis.transforms.base.AliasTransform object a... bridging True 0.0
35 FAFB FAFB14 <navis.transforms.base.AliasTransform object a... bridging True 0.0
36 FAFBum FAFB14um <navis.transforms.base.AliasTransform object a... bridging True 0.0
37 FAFBnm FAFB14nm <navis.transforms.base.AliasTransform object a... bridging True 0.0
38 FANC FANCnm <navis.transforms.base.AliasTransform object a... bridging True 0.0
39 JRCFIB2022M JRCFIB2022Mnm <navis.transforms.base.AliasTransform object a... bridging True 0.0
40 MANC MANCnm <navis.transforms.base.AliasTransform object a... bridging True 0.0
41 FLYWIRE FLYWIREnm <navis.transforms.base.AliasTransform object a... bridging True 0.0

Using xform_brain#

For high-level transforming, you will want to use navis.xform_brain. This function takes a source and target argument and tries to find a bridging sequence that gets you to where you want. Let's try it out:

Info

Incidentally, the example neurons that NAVis ships with are from the Janelia hemibrain project and are therefore in JRCFIB2018Fraw space ("raw" means uncalibrated voxel space, which is 8x8x8nm for this dataset). We will be using those, but there is nothing stopping you from using the NAVis interface with neuPrint (see the tutorials on interfaces) to fetch your favourite hemibrain neurons and transform those instead.
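
For example, fetching hemibrain skeletons via the neuPrint interface might look roughly like the sketch below (see the interfaces tutorial for the real walkthrough). The server URL, dataset name and token are placeholders for your own setup, and the exact call signature is an assumption - double-check the interface documentation.

from navis.interfaces import neuprint

# Placeholder server/dataset/token - adjust to your own neuPrint setup
client = neuprint.Client(
    "https://neuprint.janelia.org",
    dataset="hemibrain:v1.2.1",
    token="YOUR_API_TOKEN",
)

# Fetch skeletons by body ID; like the bundled examples, these come back
# in raw (8x8x8 nm voxel) hemibrain space
skels = neuprint.fetch_skeletons([1734350788, 1734350908], client=client)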

# Load the example hemibrain neurons (JRCFIB2018Fraw space)
nl = navis.example_neurons()
nl
<class 'navis.core.neuronlist.NeuronList'> containing 5 neurons (1.4MiB)
type name id n_nodes n_connectors n_branches n_leafs cable_length soma units created_at origin file
0 navis.TreeNeuron DA1_lPN_R 1734350788 4465 2705 599 618 266476.87500 4177.0 8 nanometer 2024-09-18 12:25:29.729522 /home/runner/work/navis/navis/navis/data/swc/1... 1734350788.swc
1 navis.TreeNeuron DA1_lPN_R 1734350908 4847 3042 735 761 304332.65625 6.0 8 nanometer 2024-09-18 12:25:29.737417 /home/runner/work/navis/navis/navis/data/swc/1... 1734350908.swc
... ... ... ... ... ... ... ... ... ... ... ... ... ...
3 navis.TreeNeuron DA1_lPN_R 754534424 4696 3010 696 726 286522.46875 4.0 8 nanometer 2024-09-18 12:25:29.750820 /home/runner/work/navis/navis/navis/data/swc/7... 754534424.swc
4 navis.TreeNeuron DA1_lPN_R 754538881 4881 2943 626 642 291265.31250 701.0 8 nanometer 2024-09-18 12:25:29.757718 /home/runner/work/navis/navis/navis/data/swc/7... 754538881.swc
fig, ax = navis.plot2d([nl, flybrains.JRCFIB2018Fraw], view=("x", "-z"))
plt.tight_layout()

[Figure: the example neurons plotted inside the JRCFIB2018Fraw hemibrain mesh]

Let's say we want these neurons in JRC2018F template space. Before we do the actual transform, it's useful to quickly check the bridging graph above to see what we expect to happen:

What is JRC2018F?

JRC2018F is a standard brain made from averaging over multiple fly brains. See Bogovic et al., 2020 for details.

Have a look at the bridging graph above: we know that we are starting in JRCFIB2018Fraw space. From there, it's two simple affine transforms to go from voxels to nanometers and from nanometers to micrometers. Once we are in JRCFIB2018Fum space, we can use an Hdf5 transform generated by the Saalfeld lab to map to JRC2018F. Note that the arrows in the bridging graph indicate the transforms' forward directions, but they can all be inverted to traverse the graph.

Let's give this a shot:

xf = navis.xform_brain(nl, source="JRCFIB2018Fraw", target="JRC2018F")

Out:

Transform path: JRCFIB2018Fraw -> JRCFIB2018F -> JRCFIB2018Fum -> JRC2018F

Painless, wasn't it? Let's see if it worked:

Plot the transformed neurons and the JRC2018F template brain:

fig, ax = navis.plot2d([xf, flybrains.JRC2018F], color="r", view=("x", "-y"))
plt.tight_layout()

[Figure: the transformed neurons (red) inside the JRC2018F template brain]

That worked like a charm! I highly recommend you read through the documentation for navis.xform_brain and check out the parameters you can use to fine-tune it.
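
Since every transform along this particular path is invertible, you can also map the neurons straight back by simply swapping source and target:

# Map the transformed neurons back into raw hemibrain voxel space
xf_back = navis.xform_brain(xf, source="JRC2018F", target="JRCFIB2018Fraw")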

Using mirror_brain#

Another useful type of transform is mirroring with navis.mirror_brain, e.g. to mirror neurons from the left to the right side of a given brain. This works in two steps:

  1. Reflect coordinates about the midpoint of the mirror axis (affine transformation)
  2. Apply a warping transformation to compensate for e.g. left/right asymmetries

For the first step, we need to know the length of the mirror axis. This is why - similar to having registered transforms - NAVis needs metadata about the template space (specifically its bounding box).
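
To make the first step concrete: for a single axis, the reflection is simply x' = x_min + x_max - x. Here is a minimal NumPy sketch with a made-up bounding box (navis.mirror_brain does all of this for you):

import numpy as np

# Hypothetical bounding box of the mirror (here: x) axis
x_min, x_max = 0, 100

pts = np.array([[10.0, 42.0, 7.0]])

# Reflect x about the midpoint of the axis
mirrored = pts.copy()
mirrored[:, 0] = x_min + x_max - pts[:, 0]
print(mirrored)  # [[90. 42.  7.]]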

The second step is optional. For example, JRC2018F and JRC2018U are templates generated by averaging multiple fly brains and are therefore already mirror-symmetric, meaning we don't need the additional warping transform. flybrains does include some mirror transforms though, e.g. for FCWB, VNCIS1 or JFRC2!

Since our neurons are already in JRC2018F space, let's try mirroring them:

mirrored = navis.mirror_brain(xf, template="JRC2018F")
fig, ax = navis.plot2d(
    [xf, mirrored, flybrains.JRC2018F], color=["r"] * 5 + ["g"] * 5, view=("x", "-y")
)
plt.tight_layout()

[Figure: original (red) and mirrored (green) neurons in the JRC2018F template brain]


Perfect! As noted above, this only works if the template is registered with NAVis and if it contains info about its bounding box. If you only have the bounding box at hand but no template brain, check out the lower level function navis.transforms.mirror.
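
As a rough sketch of what that might look like (the parameter names below are an assumption on my part - please check the navis.transforms.mirror docstring):

import numpy as np
import navis

pts = np.array([[10_000, 20_000, 5_000]])

# Assumed signature: the points plus the length of the mirror axis
mirrored = navis.transforms.mirror(pts, mirror_axis_size=100_000, mirror_axis="x")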

Low-level functions#

Adding your own transforms#

Let's assume you want to add your own transforms. NAVis implements several transform classes - affine, thin-plate spline, Hdf5, CMTK and Elastix transforms (see the list in the introduction above).

To show you how to use them, we will create a new thin plate spline transform using TPStransform. If you look at the bridging graph again, you might note the "FAFB14" template brain: it stands for "Full Adult Fly Brain" (the 14 is a version number for the alignment). We will use landmarks to generate a mapping between this version (v14) and the previous iteration (v13).

First, we will grab the landmarks from the Saalfeld lab's elm repository:

import pandas as pd

# These landmarks map between FAFB (v14 and v13) and a light-level template
# We will use only the v13 and v14 landmarks
landmarks_v14 = pd.read_csv(
    "https://github.com/saalfeldlab/elm/raw/master/lm-em-landmarks_v14.csv", header=None
)
landmarks_v13 = pd.read_csv(
    "https://github.com/saalfeldlab/elm/raw/master/lm-em-landmarks_v13.csv", header=None
)

# Name the columns
landmarks_v14.columns = landmarks_v13.columns = [
    "label",
    "use",
    "lm_x",
    "lm_y",
    "lm_z",
    "fafb_x",
    "fafb_y",
    "fafb_z",
]

landmarks_v13.head()
label use lm_x lm_y lm_z fafb_x fafb_y fafb_z
0 Pt-1 True 571.400083 38.859963 287.059544 525666.465856 172470.413167 80994.733289
1 Pt-2 True 715.811344 213.299356 217.393493 595391.597008 263523.121958 84156.773677
2 Pt-3 True 513.002196 198.001970 217.794090 501716.347872 253223.667163 98413.701578
3 Pt-6 True 867.012542 31.919253 276.223437 670999.903156 179097.916778 67561.691416
4 Pt-7 True 935.210895 234.229522 351.518068 702703.909963 251846.384054 127865.886146

Now we can use those landmarks to generate a thin plate spline transform:

from navis.transforms.thinplate import TPStransform

tr = TPStransform(
    landmarks_source=landmarks_v14[["fafb_x", "fafb_y", "fafb_z"]].values,
    landmarks_target=landmarks_v13[["fafb_x", "fafb_y", "fafb_z"]].values,
)
# note: navis.transforms.MovingLeastSquaresTransform has similar properties

The transform has an .xform() method that we can use to transform points, but first we need some data in FAFB14 space:

# Transform our neurons into FAFB 14 space
xf_fafb14 = navis.xform_brain(nl, source="JRCFIB2018Fraw", target="FAFB14")

Out:

Transform path: JRCFIB2018Fraw -> JRCFIB2018F -> JRCFIB2018Fum -> JRC2018F -> FAFBum = FAFB14um -> FAFB14

Now let's see if we can use the v14-to-v13 transform:

# Transform the nodes of the first two neurons
pts_v14 = xf_fafb14[:2].nodes[["x", "y", "z"]].values
pts_v13 = tr.xform(pts_v14)

Quick check how the v14 and v13 coordinates compare:

# Original in black, transformed in red
fig, ax = navis.plot2d(pts_v14, scatter_kws=dict(c="k"), view=("x", "-y"))
_ = navis.plot2d(pts_v13, scatter_kws=dict(c="r"), ax=ax, view=("x", "-y"))

[Figure: original v14 node coordinates (black) and transformed v13 coordinates (red)]
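
The plot only gives a qualitative impression; a quick look at the per-node offsets (plain NumPy, no navis functionality involved) can help judge whether the magnitude of the shift is plausible:

import numpy as np

# Mean absolute displacement per axis between the v14 and v13 coordinates
offsets = np.abs(pts_v13 - pts_v14)
print(offsets.mean(axis=0))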

So that did... something. To be honest, I'm not sure what to expect from the FAFB v14-to-v13 transform, but let's assume this is correct and move on.

Next, we will register this new transform with NAVis so that we can use it with higher level functions:

# Register the transform
navis.transforms.registry.register_transform(
    tr, source="FAFB14", target="FAFB13", transform_type="bridging"
)
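
If you want to double-check that the registration worked, the registry summary should now contain a FAFB14 -> FAFB13 bridging edge (assuming, as the output above suggests, that summary() returns a DataFrame with source and target columns):

# Look for the newly registered bridging transform
summary = navis.transforms.registry.summary()
print(summary[(summary.source == "FAFB14") & (summary.target == "FAFB13")])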

Now that this is done, we can use FAFB13 with navis.xform_brain:

# Transform our neurons from FAFB14 into FAFB13 space
xf_fafb13 = navis.xform_brain(xf_fafb14, source="FAFB14", target="FAFB13")

Out:

Transform path: FAFB14 -> FAFB13

# Original FAFB14 neurons in black, FAFB13-transformed neurons in red
fig, ax = navis.plot2d(xf_fafb14, c='k', view=("x", "-y"))
_ = navis.plot2d(xf_fafb13, c='r', ax=ax)

[Figure: FAFB14 neurons (black) overlaid with the FAFB13-transformed neurons (red)]

Registering Template Brains#

For completeness, let's also have a quick look at registering additional template brains.

Template brains are represented in navis as navis.transforms.templates.TemplateBrain and there is currently no canonical way of constructing them: you can associate as much or as little data with them as you like. However, for them to be useful they should have a name, a label and a boundingbox property.

Minimally, you could do something like this:

# Construct template brain from base class
my_brain = navis.transforms.templates.TemplateBrain(
    name="My template brain",
    label="my_brain",
    boundingbox=[[0, 100], [0, 100], [0, 100]],
)

# Register with navis
navis.transforms.registry.register_templatebrain(my_brain)

# Now you can use it with mirror_brain:
import numpy as np

pts = np.array([[10, 10, 10]])
pts_mirrored = navis.mirror_brain(pts, template="my_brain")

# Plot the points
fig, ax = plt.subplots()
ax.scatter(pts[:, 0], pts[:, 1], c="k", alpha=1, s=50, label="Original")
ax.scatter(
    pts_mirrored[:, 0], pts_mirrored[:, 1], c="r", alpha=1, s=50, label="Mirrored"
)
ax.legend()

[Figure: original (black) and mirrored (red) points]


While this is a working solution, it's not very pretty: for example, my_brain only has the default docstring and no fancy string representation (e.g. for print(my_brain)). I highly recommend you take a look at how flybrains constructs and packages its templates.
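
One way to get a nicer representation is to subclass TemplateBrain and override __str__/__repr__. The sketch below assumes that, as in the example above, keyword arguments passed to the constructor end up as attributes; the real flybrains templates carry much more metadata than this:

class MyTemplateBrain(navis.transforms.templates.TemplateBrain):
    """My custom template brain with a friendlier representation."""

    def __str__(self):
        return f"{self.name} ({self.label}), bounding box: {self.boundingbox}"

    __repr__ = __str__


nicer_brain = MyTemplateBrain(
    name="My nicer template brain",
    label="my_nicer_brain",
    boundingbox=[[0, 100], [0, 100], [0, 100]],
)
print(nicer_brain)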

Acknowledgments#

Much of the transform module is modelled after functions written by Greg Jefferis for the natverse. Likewise, flybrains is a port of data collected by Greg Jefferis for nat.flybrains and nat.jrcbrains.
