
The MICrONS Datasets#

In this tutorial we will explore the MICrONS datasets.

The Allen Institute for Brain Science, in collaboration with Princeton University and Baylor College of Medicine, released two large connectomics datasets:

  1. A "Cortical mm^3" dataset of mouse visual cortex. This dataset is split into two portions: "65" and "35".
  2. A smaller "Layer 2/3" dataset of mouse visual cortex.

All of these can be browsed via the MICrONS Explorer using neuroglancer. These data are public and, thanks to the excellent cloud-volume and caveclient libraries (developed by William Silversmith, Forrest Collman, Sven Dorkenwald, Casey Schneider-Mizell and others), we can easily fetch neurons and their connectivity.

For easier interaction, NAVis ships with a small interface to these datasets. To use it, we will have to make sure caveclient (and with it cloud-volume) is installed:

pip install caveclient cloud-volume -U

The first time you run the code below, you might have to get and set a client secret. Simply follow the instructions in the terminal and, when in doubt, check out the section about authentication in the caveclient docs.

Let's get started:

import navis
import navis.interfaces.microns as mi

You will find that most functions in the interface accept a datastack parameter. At the time of writing, the available stacks are:

  • cortex65 (also called "minnie65") is the anterior portion of the cortical mm3 dataset
  • cortex35 (also called "minnie35") is the (smaller) posterior portion of the cortical mm3 dataset
  • layer 2/3 (also called "pinky") is the earlier, smaller cortical dataset

If not specified, the default is cortex65. Both cortex65 and cortex35 always map to the most recent version of that dataset. You can use get_datastacks to see all available datastacks:

mi.get_datastacks()

Out:

['minnie35_public_v0', 'minnie65_public_v117', 'minnie65_public_v343', 'minnie65_public_v661', 'pinky_sandbox', 'minnie65_sandbox', 'minnie65_public']

Let's start with some basic queries using the caveclient directly:

# Initialize the client for the 65 part of cortical mm^3 (i.e. "Minnie")
client = mi.get_cave_client(datastack="cortex65")

# Fetch available annotation tables
client.materialize.get_tables()

Out:

['nucleus_alternative_points', 'allen_column_mtypes_v2', 'bodor_pt_cells', 'aibs_metamodel_mtypes_v661_v2', 'allen_v1_column_types_slanted_ref', 'aibs_column_nonneuronal_ref', 'nucleus_ref_neuron_svm', 'apl_functional_coreg_vess_fwd', 'vortex_compartment_targets', 'baylor_log_reg_cell_type_coarse_v1', 'functional_properties_v3_bcm', 'l5et_column', 'pt_synapse_targets', 'coregistration_auto_phase3_fwd_apl_vess_combined', 'coregistration_manual_v4', 'vortex_manual_myelination_v0', 'synapses_pni_2', 'nucleus_detection_v0', 'vortex_manual_nodes_of_ranvier', 'vortex_astrocyte_proofreading_status', 'bodor_pt_target_proofread', 'nucleus_functional_area_assignment', 'coregistration_auto_phase3_fwd', 'synapse_target_structure', 'proofreading_status_and_strategy', 'aibs_metamodel_celltypes_v661']

These are the available public tables which we can use to fetch metadata. Let's check out baylor_log_reg_cell_type_coarse_v1. Note that there is also a baylor_gnn_cell_type_fine_model_v2 table which contains more detailed cell types.

# Get cell type table
ct = client.materialize.query_table("baylor_log_reg_cell_type_coarse_v1")
ct.head()
|   | id | created | valid | target_id | classification_system | cell_type | id_ref | created_ref | valid_ref | volume | pt_supervoxel_id | pt_root_id | pt_position | bb_start_position | bb_end_position |
|---|----|---------|-------|-----------|-----------------------|-----------|--------|-------------|-----------|--------|------------------|------------|-------------|-------------------|-----------------|
| 0 | 25718 | 2023-03-22 18:05:52.744496+00:00 | t | 17115 | baylor_log_reg_cell_type_coarse | inhibitory | 17115 | 2020-09-28 22:41:18.237823+00:00 | t | 268.646482 | 75934403318291307 | 864691135635239593 | [80992, 109360, 15101] | [nan, nan, nan] | [nan, nan, nan] |
| 1 | 25581 | 2023-03-22 18:05:52.650844+00:00 | t | 17816 | baylor_log_reg_cell_type_coarse | inhibitory | 17816 | 2020-09-28 22:42:54.932823+00:00 | t | 264.795587 | 75090047309035210 | 864691135618175635 | [74880, 110032, 16883] | [nan, nan, nan] | [nan, nan, nan] |
| 2 | 5033 | 2023-03-22 18:04:23.575096+00:00 | t | 18023 | baylor_log_reg_cell_type_coarse | inhibitory | 18023 | 2020-09-28 22:43:00.306675+00:00 | t | 264.791327 | 75934266147628505 | 864691135207734905 | [81008, 108240, 16995] | [nan, nan, nan] | [nan, nan, nan] |
| 3 | 32294 | 2023-03-22 18:06:11.872068+00:00 | t | 18312 | baylor_log_reg_cell_type_coarse | inhibitory | 18312 | 2020-09-28 22:44:09.407821+00:00 | t | 221.584753 | 75441272688753483 | 864691135758479438 | [77392, 105280, 17650] | [nan, nan, nan] | [nan, nan, nan] |
| 4 | 2693 | 2023-03-22 18:04:21.985021+00:00 | t | 255686 | baylor_log_reg_cell_type_coarse | excitatory | 255686 | 2020-09-28 22:40:42.632533+00:00 | t | 297.846047 | 88954888800920543 | 864691135568539372 | [175760, 126480, 15504] | [nan, nan, nan] | [nan, nan, nan] |
ct.cell_type.value_counts()

Out:

cell_type
excitatory    49208
inhibitory     5855
Name: count, dtype: int64

Important

Not all neurons in the dataset have been proofread. In theory, you can check if a neuron has been proofread using the corresponding annotation table:

table = client.materialize.query_table('proofreading_status_public_release')
fully_proofread = table[
    table.status_dendrite.isin(['extented', 'clean'])
    & table.status_axon.isin(['extented', 'clean'])
].pt_root_id.values

However, it appears that the proofreading status table may be outdated at the moment.
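To make the filtering logic tangible, here is a minimal, self-contained sketch of the same isin-based filter applied to a toy proofreading table. The IDs and status values are made up for illustration (the status spellings follow the snippet above); this is not data from the actual dataset:

```python
import pandas as pd

# Toy proofreading table (hypothetical IDs and statuses).
# A neuron counts as fully proofread only if BOTH its dendrite
# and its axon are marked 'clean' or 'extented'.
table = pd.DataFrame({
    "pt_root_id": [1, 2, 3],
    "status_dendrite": ["clean", "extented", "non"],
    "status_axon": ["clean", "non", "clean"],
})

fully_proofread = table[
    table.status_dendrite.isin(["extented", "clean"])
    & table.status_axon.isin(["extented", "clean"])
].pt_root_id.values

print(fully_proofread)  # -> [1]
```

Only neuron 1 passes: neuron 2 fails on the axon status and neuron 3 on the dendrite status.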

Let's fetch one of the excitatory neurons:

n = mi.fetch_neurons(
    ct[ct.cell_type == "excitatory"].pt_root_id.values[0], with_synapses=False
)[0]
n
type        navis.MeshNeuron
name        None
id          864691135568539372
units       1 nanometer
n_vertices  1508981
n_faces     3034247

Neuron IDs

The neuron IDs in MICrONS are called "root IDs" because each one represents a collection of supervoxels - or rather a hierarchy of chunks whose lowest layer consists of the supervoxel IDs.
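The idea behind the hierarchy can be sketched with plain dictionaries. Note that the IDs and the two-level structure here are purely illustrative (the real chunked graph has more levels and is queried via caveclient, not via dicts):

```python
# Toy two-level hierarchy: supervoxels -> chunks -> root.
# All IDs below are hypothetical.
supervoxel_to_chunk = {101: 11, 102: 11, 103: 12}
chunk_to_root = {11: 864691135000000001, 12: 864691135000000001}

def root_id(supervoxel):
    """Resolve a supervoxel ID to its root ID by walking up the hierarchy."""
    return chunk_to_root[supervoxel_to_chunk[supervoxel]]

# Different supervoxels of the same neuron resolve to the same root ID
print(root_id(101))  # -> 864691135000000001
print(root_id(103))  # -> 864691135000000001
```

Because proofreading edits split and merge supervoxel collections, root IDs change over time - which is why materialization versions matter when querying tables.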

MICrONS neurons can be fairly large, i.e. have lots of faces. You can try using a higher lod ("level of detail", higher = coarser) but not all datastacks actually support multi-resolution meshes. If they don't (like this one), the lod parameter is silently ignored.

For visualization in this documentation we will simplify the neuron a little. For this, you need either open3d (pip3 install open3d), pymeshlab (pip3 install pymeshlab) or Blender 3D on your computer.

# Reduce face counts to 1/3 of the original
n_ds = navis.simplify_mesh(n, F=1 / 3)

# Inspect (note the lower face/vertex counts)
n_ds
type        navis.MeshNeuron
name        None
id          864691135568539372
units       1 nanometer
n_vertices  501988
n_faces     1011415

Plot the downsampled neuron (again: the downsampling is mostly for the sake of this documentation):

navis.plot3d(
    n_ds,
    radius=False,
    color="r",
    legend=False,  # hide the legend (more space for the plot)
)

Nice! Now let's run a bit of analysis.

Sholl Analysis#

Sholl analysis is a simple way to quantify the complexity of a neuron's arbor. It counts the number of intersections a neuron's arbor makes with concentric spheres around a center (typically the soma). The number of intersections is then plotted against the radius of the spheres.
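The core of the method can be sketched in a few lines of NumPy. This is a simplified toy version (not navis' implementation, which operates on a proper skeleton): a neurite is treated as a sequence of 3D points, and a segment "intersects" the sphere of radius r if one endpoint lies inside it and the other outside:

```python
import numpy as np

def sholl_counts(points, center, radii):
    """Count, for each radius, how many segments cross that sphere."""
    d = np.linalg.norm(points - center, axis=1)  # node distances to the center
    a, b = d[:-1], d[1:]                         # distances of each segment's endpoints
    # A segment crosses the sphere if its endpoints straddle the radius
    return np.array(
        [np.sum((np.minimum(a, b) < r) & (np.maximum(a, b) >= r)) for r in radii]
    )

# A straight neurite running outwards from the center crosses each sphere once
pts = np.array([[0, 0, 0], [10, 0, 0], [20, 0, 0], [30, 0, 0]], dtype=float)
print(sholl_counts(pts, np.zeros(3), radii=[5, 15, 25]))  # -> [1 1 1]
```

A branching arbor would cross the larger spheres multiple times, which is exactly what the intersection counts below capture.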

import numpy as np

# The neuron mesh will automatically be skeletonized for this analysis
# Note: we're defining radii from 0 to 160 microns in 5 micron steps
sha = navis.sholl_analysis(n, center="soma", radii=np.arange(0, 160_000, 5_000))

Out:

/opt/hostedtoolcache/Python/3.10.15/x64/lib/python3.10/site-packages/skeletor/skeletonize/wave.py:198: DeprecationWarning:

Graph.clusters() is deprecated; use Graph.connected_components() instead

/opt/hostedtoolcache/Python/3.10.15/x64/lib/python3.10/site-packages/skeletor/skeletonize/wave.py:228: DeprecationWarning:

Graph.shortest_paths() is deprecated; use Graph.distances() instead

Plot the results

import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(10, 5))

sha.intersections.plot(c="r")

ax.set_xlabel("radius [nm]")
ax.set_ylabel("# of intersections")
ax.patch.set_color((0, 0, 0, 0))  # Make background transparent
fig.patch.set_color((0, 0, 0, 0))

plt.tight_layout()

[Figure: Sholl intersection counts plotted against radius]

See navis.sholl_analysis for ways to fine-tune the analysis. Last but not least, a quick visualization alongside the neuron:

from matplotlib.colors import Normalize
from matplotlib.cm import ScalarMappable
from mpl_toolkits.axes_grid1 import make_axes_locatable

# Plot one of the excitatory neurons
fig, ax = navis.plot2d(n, view=("x", "y"), figsize=(10, 10), c="k", method="2d")

cmap = plt.get_cmap("viridis")

# Plot Sholl circles and color by number of intersections
center = n.soma_pos
# Normalize the intersection counts for the colormap
norm = Normalize(vmin=0, vmax=(sha.intersections.max() + 1))
for r in sha.index.values:
    ints = sha.loc[r, "intersections"]
    ints_norm = norm(ints)
    color = cmap(ints_norm)

    c = plt.Circle(center[:2], r, ec=color, fc="none")
    ax.add_patch(c)

# Add colorbar
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
_ = plt.colorbar(
    ScalarMappable(norm=norm, cmap=cmap), cax=cax, label="# of intersections"
)

[Figure: the neuron in 2D overlaid with Sholl circles colored by intersection count]

Render Videos#

Beautiful data like the MICrONS datasets lend themselves to visualizations. For making high quality videos (and renderings) I recommend you check out the tutorial on navis' Blender interface. Here's a little taster:

Total running time of the script: (1 minute 22.734 seconds)
