Axon Segmentation Analysis of Whole-Brain Fluorescence Images

[ ]:
from brainlit.BrainLine.util import (
    json_to_points,
    download_subvolumes,
)
from brainlit.BrainLine.parse_ara import *
import xml.etree.ElementTree as ET
from brainlit.BrainLine.imports import *
from brainlit.BrainLine.apply_ilastik import (
    ApplyIlastik,
    ApplyIlastik_LargeImage,
    plot_results,
    examine_threshold,
    downsample_mask,
)
from brainlit.BrainLine.analyze_results import (
    AxonDistribution,
    collect_regional_segmentation,
)

%gui qt5

1. Before using this notebook

1a. Install brainlit and its dependencies

1b. Write images to s3 using CloudReg

- e.g. python -m cloudreg.scripts.create_precomputed_volumes --s3_input_paths <path-to-stitched-images> --s3_output_paths <s3-address-for-output> --voxel_size <x-resolution> <y-resolution> <z-resolution> --num_procs <num-cpus> --resample_iso <boolean-to-resample-isotropically>

1c. Make point annotations in neuroglancer to identify subvolumes for validation (and possible training)

Instructions: create a point annotation layer named val in neuroglancer, e.g. by adding a JSON snippet like the following to the neuroglancer state:

    ,
    {
        "type": "pointAnnotation",
        "name": "val",
        "points": []
    }
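
Once the layer exists, the json_to_points helper imported above can pull the annotated points back out of a neuroglancer state. A minimal sketch (the URL is a placeholder for your own state):

[ ]:
# Sketch: read annotation points back from a neuroglancer state
# (the URL below is a placeholder)
viz_url = "<neuroglancer URL with the val annotation layer>"
point_layers = json_to_points(viz_url)  # dict mapping layer names to point arrays
print(point_layers.keys())  # should include "val"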

1d. Update axon_data.json file

This file stores information on how to access neuroglancer data.

Data should be stored in the brain2paths dictionary, with entries like:

"<sample ID>": {
    "base_local": "<Path to directory with layers with CloudVolume prependings (ending with forward slash)>",
    "base_s3": "<Path to directory with layers with CloudVolume prependings (ending with forward slash)>",
    "val_info": {
        "url": "<neuroglancer URL>",
        "somas_layer": "<name of layer with coordinates on somas>",
        "nonsomas_layer": "<name of layer with coordinates on non-somas>",
    },
    "subtype": "<subtype>"
    #Optional:
    "train_info": {
        "url": "<neuroglancer URL>",
        "somas_layer": "<name of layer with coordinates on somas>",
        "nonsomas_layer": "<name of layer with coordinates on non-somas>",
    },
    "transformed mask": "<cloudvolume path to transformed axon mask>"
}
[ ]:
brainlit_path = Path(os.path.abspath(""))
brainlit_path = brainlit_path.parents[3]
print(f"Path to brainlit: {brainlit_path}")
data_file = (
    brainlit_path / "docs" / "notebooks" / "pipelines" / "BrainLine" / "axon_data.json"
)

brain = "test"  # brain ID
axon_data_dir = (
    str(
        brainlit_path
        / "docs"
        / "notebooks"
        / "pipelines"
        / "BrainLine"
        / "validation-data"
        / "axon"
    )
    + "/"
)  # path to directory where training/validation data should be stored

# Modify base path of test sample according to your system
with open(data_file, "r") as f:
    data = json.load(f)


base = (
    "precomputed://file://"
    + str(
        brainlit_path
        / "docs"
        / "notebooks"
        / "pipelines"
        / "BrainLine"
        / "example-data"
    )
    + "/"
)
data["brain2paths"]["test"]["base_s3"] = base
data["brain2paths"]["test"]["base_local"] = base

brain2paths = data["brain2paths"]

with open(data_file, "w") as f:
    json.dump(data, f, indent=4)
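
As a quick sanity check before continuing, you can confirm that the sample's entry now carries the keys the later steps rely on:

[ ]:
# Sanity check: the entry should contain the keys used below
for key in ["base_local", "base_s3", "val_info", "subtype"]:
    assert key in brain2paths[brain], f"missing key: {key}"
print(brain2paths[brain]["base_local"])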

Validation: Steps 2, 4-5, and 7 below can alternatively be done via the script brainlit/BrainLine/scripts/axon_validate.py

2. Download benchmark data

*Inputs*

For the test sample to work, you should serve it via neuroglancer’s cors_webserver, e.g. with the command python cors_webserver.py -d "/Users/thomasathey/Documents/mimlab/mouselight/brainlit_parent/brainlit/brainlit/BrainLine/data/example" -p 9010

[ ]:
antibody_layer = "antibody"
background_layer = "background"
endogenous_layer = "endogenous"

dataset_to_save = "val"  # train or val

layer_names = [antibody_layer, background_layer, endogenous_layer]

Download data

[ ]:
download_subvolumes(
    axon_data_dir,
    brain_id=brain,
    data_file=data_file,
    layer_names=layer_names,
    dataset_to_save=dataset_to_save,
)
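
To confirm the download succeeded, you can list the files that were written (this assumes the brain<ID>/val layout used in the next step):

[ ]:
# List downloaded subvolume files (layout: <axon_data_dir>/brain<ID>/val/*.h5)
val_dir = Path(axon_data_dir) / f"brain{brain}" / "val"
for p in sorted(val_dir.glob("*.h5")):
    print(p.name)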

3. View downloaded data (optional)

*Inputs*

[ ]:
fname = (
    Path(axon_data_dir) / f"brain{brain}" / "val" / "2054_1951_1825.h5"
)  # path to file for viewing
scale = [1.8, 1.8, 2]  # voxel size in microns
[ ]:
with h5py.File(fname, "r") as f:
    image = f.get("image_3channel")  # channels: 0 = background, 1 = foreground (antibody), 2 = endogenous
    image_bg = image[0, :, :, :]
    image_fg = image[1, :, :, :]
    image_endo = image[2, :, :, :]

viewer = napari.Viewer(ndisplay=3)
viewer.add_image(image_fg, scale=scale)
viewer.add_image(image_bg, scale=scale)
viewer.add_image(image_endo, scale=scale)
viewer.scale_bar.visible = True
viewer.scale_bar.unit = "um"
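
If you want to keep a record of the rendering, napari can save the canvas to an image (the output file name here is arbitrary):

[ ]:
# Optional: save a screenshot of the napari canvas
viewer.screenshot("subvolume_preview.png", canvas_only=True)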

4. Apply Ilastik to validation data

You will need to do two things:

- Add annotations to the downloaded data (for me, partial labels on 3 of the z-slices using ilastik), and export the labels to the same directory.
- Apply the axon segmentation model to the downloaded data. Results should be located in the same directory as the subvolumes, with "_Probabilities" appended to the file names. You can do this programmatically (below), or use the ilastik GUI (which is sometimes faster).

Note: make sure the foreground/background labels are matched between the model and your annotations (for me, blue/1 = axon, yellow/0 = background)

[ ]:
ilastik_project = str(
    brainlit_path / Path("experiments/BrainLine/data/models/axon/axon_segmentation.ilp")
)  # path to ilastik model to be used
ilastik_path = (
    "/Applications/ilastik-1.4.0b21-OSX.app/Contents/ilastik-release/run_ilastik.sh"
)  # path to ilastik executable; modify according to your system
brains = [brain]
[ ]:
applyilastik = ApplyIlastik(
    ilastik_path=ilastik_path,
    project_path=ilastik_project,
    brains_path=axon_data_dir,
    brains=brains,
)
applyilastik.process_subvols(ncpu=1)
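
Before checking the results, you can peek at one of the output files to confirm its shape; this sketch assumes ilastik's default export dataset name, exported_data:

[ ]:
# Sketch: inspect an ilastik probability file for the subvolume viewed in step 3
# ("exported_data" is ilastik's default export dataset name; adjust if your
# project exports under a different name)
prob_fname = str(fname).replace(".h5", "_Probabilities.h5")
with h5py.File(prob_fname, "r") as f:
    probs = f["exported_data"]
    print(probs.shape)  # channel 1 should hold the axon probabilities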

5. Check results

[ ]:
plot_results(
    data_dir=axon_data_dir, brain_ids=[brain], positive_channel=1, object_type="axon"
)
plt.show()

If the results above are not adequate, improve the model and try again.

In my case, I identify more subvolumes from the sample at hand using the same process as for the validation data, add them to the model as training data, and retrain.

Examine best threshold

[ ]:
examine_threshold(
    data_dir=axon_data_dir,
    brain_id=brain,
    threshold=0.52,
    object_type="axon",
    positive_channel=1,
)
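
examine_threshold takes a single threshold, so one way to compare candidates is simply to loop over a few values with the same arguments:

[ ]:
# Sketch: compare a few candidate thresholds
for t in [0.4, 0.5, 0.52, 0.6]:
    examine_threshold(
        data_dir=axon_data_dir,
        brain_id=brain,
        threshold=t,
        object_type="axon",
        positive_channel=1,
    )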

6. Apply ilastik to whole image

This can alternatively be done via a script: brainlit/BrainLine/axon_segment_image

*Inputs*

[ ]:
threshold = 0.52  # threshold to use for ilastik
data_dir = (
    axon_data_dir + "brain_temp/"
)  # directory to store temporary subvolumes for segmentation

min_coords = [2000, 1900, 1800]
max_coords = [2100, 2000, 1900]  # -1 if you want to process the whole dimension

ncpu = 1  # number of cores to use for segmentation
chunk_size = [100, 100, 100]
[ ]:
layer_names = [antibody_layer, background_layer, endogenous_layer]
alli = ApplyIlastik_LargeImage(
    ilastik_path=ilastik_path,
    ilastik_project=ilastik_project,
    ncpu=ncpu,
    data_file=data_file,
)
alli.apply_ilastik_parallel(
    brain_id=brain,
    layer_names=layer_names,
    threshold=threshold,
    data_dir=data_dir,
    chunk_size=chunk_size,
    min_coords=min_coords,
    max_coords=max_coords,
)
alli.collect_axon_results(brain_id=brain, ng_layer_name="antibody")
[ ]:
downsample_mask(
    brain, data_file=data_file, ncpu=1, bounds=[[2000, 1600, 1600], [2400, 2000, 2000]]
)
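
You can read back a chunk of the resulting mask directly with CloudVolume; this sketch assumes the segmentation was written under base_local with the layer name axon_mask:

[ ]:
# Sketch: read back a chunk of the axon_mask layer
# (assumes it was written under base_local as a layer named "axon_mask")
mask_vol = CloudVolume(
    brain2paths[brain]["base_local"] + "axon_mask", fill_missing=True
)
chunk = mask_vol[2000:2100, 1900:2000, 1800:1900]
print(f"chunk shape: {chunk.shape}, positive voxels: {int(chunk.sum())}")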

7. Prepare transformed layers

[ ]:
atlas_vol = CloudVolume(
    "precomputed://https://open-neurodata.s3.amazonaws.com/ara_2016/sagittal_10um/annotation_10um_2017"
)
for layer in [
    antibody_layer,
    background_layer,
    "axon_mask",
]:  # axon_mask is transformed into an image because nearest-neighbor interpolation doesn't work well after downsampling
    layer_path = brain2paths[brain]["base_s3"] + layer + "_transformed"
    info = CloudVolume.create_new_info(
        num_channels=1,
        layer_type="image",
        data_type="uint16",  # Channel images might be 'uint8'
        encoding="raw",  # raw, jpeg, compressed_segmentation, fpzip, kempressed
        resolution=atlas_vol.resolution,  # Voxel scaling, units are in nanometers
        voxel_offset=atlas_vol.voxel_offset,
        chunk_size=[32, 32, 32],  # units are voxels
        volume_size=atlas_vol.volume_size,  # e.g. a cubic millimeter dataset
    )
    vol_mask = CloudVolume(layer_path, info=info)
    vol_mask.commit_info()
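
To verify the destination layers were committed with the atlas geometry, you can reopen them and print their metadata:

[ ]:
# Verify the *_transformed layers match the atlas geometry
for layer in [antibody_layer, background_layer, "axon_mask"]:
    vol = CloudVolume(brain2paths[brain]["base_s3"] + layer + "_transformed")
    print(layer, vol.volume_size, vol.resolution)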

8. Register volume and transform data to atlas space using CloudReg

8a. You need to find an initial affine alignment using cloudreg.scripts.registration.get_affine_matrix.

A link to the ARA parcellation is:

precomputed://https://open-neurodata.s3.amazonaws.com/ara_2016/sagittal_10um/annotation_10um_2017

And some Python commands to help with affine alignment are:

from cloudreg.scripts.registration import get_affine_matrix
get_affine_matrix([1,1,1], [15,0,0], "PIR", "RAI", 1.15, "precomputed://https://open-neurodata.s3.amazonaws.com/ara_2016/sagittal_10um/annotation_10um_2017")

8b. Run registration using cloudreg.scripts.registration. For example:

python -m cloudreg.scripts.registration --input_s3_path precomputed://s3://smartspim-precomputed-volumes/2022_11_01/8790/Ch_561 --output_s3_path precomputed://s3://smartspim-precomputed-volumes/2022_11_01/8790/atlas_to_target --atlas_s3_path https://open-neurodata.s3.amazonaws.com/ara_2016/sagittal_50um/average_50um --parcellation_s3_path https://open-neurodata.s3.amazonaws.com/ara_2016/sagittal_10um/annotation_10um_2017 --atlas_orientation PIR --orientation RAI --rotation 0 0 0 --translation 0 0 0 --fixed_scale 1.2 --log_s3_path precomputed://s3://smartspim-precomputed-volumes/2022_11_01/8790/atlas_to_target --missing_data_correction True --grid_correction False --bias_correction True --regularization 5000.0 --iterations 3000 --registration_resolution 100

8c. Transform segmentation to atlas space using CloudReg

python -m cloudreg.scripts.transform_data --target_layer_source precomputed://file:///cis/home/tathey/brainline-example/axon_mask --transformed_layer_source precomputed://file:///cis/home/tathey/brainline-example/axon_mask_transformed --affine_path "/cis/home/tathey/brainline-example/downloop_1_A.mat"  --velocity_path "/cis/home/tathey/brainline-example/downloop_1_v.mat"

This will write a layer to s3 with the transformed axon mask. The path to this layer should be added to axon_data.json under the transformed_mask key.
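
For example, the key can be filled in programmatically, mirroring how base_s3 was set in step 1d (the S3 path below is a placeholder):

[ ]:
# Sketch: record the transformed mask layer in axon_data.json
# (the path below is a placeholder; use the layer written by transform_data)
with open(data_file, "r") as f:
    data = json.load(f)
data["brain2paths"][brain]["transformed_mask"] = (
    "precomputed://s3://<bucket>/axon_mask_transformed"
)
with open(data_file, "w") as f:
    json.dump(data, f, indent=4)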

Analysis: Steps 9-11 below can alternatively be done via the script brainlit/BrainLine/scripts/axon_analyze.py

9. Combine registration and segmentation results

[ ]:
collect_regional_segmentation(
    brain_id=brain,
    data_file=data_file,
    outdir=axon_data_dir,
    max_coords=max_coords,
    min_coords=min_coords,
    ncpu=2,
)

10. View axon segmentation in brain space

*Inputs*

[ ]:
brain_ids = ["test"]

colors = {
    "test_type": "red",
}  # colors for each subtype
fold_on = False

ontology_file = (
    brainlit_path / "brainlit" / "Brainline" / "data" / "ara_structure_ontology.json"
)
[ ]:
ad = AxonDistribution(
    brain_ids=brain_ids,
    data_file=data_file,
    ontology_file=ontology_file,
    regional_distribution_dir=axon_data_dir,
)
[ ]:
ad.napari_coronal_section(z=1080, subtype_colors=colors, fold_on=fold_on)
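
For comparison, the same section can be rendered with folding enabled:

[ ]:
# Same coronal section, with fold_on enabled (cf. fold_on = False above)
ad.napari_coronal_section(z=1080, subtype_colors=colors, fold_on=True)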

11. Display bar charts

*Inputs*

[ ]:
wholebrain_results_dir = ""  # path to directory with whole-brain results

brains = [brain]  # list of sample IDs to be shown

regions = [
    512,  # cerebellum
    688,  # cerebral cortex
    698,  # olfactory areas
    1089,  # hippocampal formation
    # 583, # claustrum
    477,  # striatum
    # 803, # pallidum
    351,  # bed nuclei of stria terminalis
    # 703, #cortical subplate
    1097,  # hypothalamus
    549,  # thalamus
    186,  # lateral habenula
    519,  # cerebellar nuclei
    313,  # midbrain
    1065,  # hindbrain
]  # allen atlas region IDs to be shown
# see: https://connectivity.brain-map.org/projection/experiment/480074702?imageId=480075280&initImage=TWO_PHOTON&x=17028&y=11704&z=3

composite_regions = {
    "Amygdalar Nuclei": [131, 295, 319, 780]
}  # custom composite Allen regions: key is region name, value is a list of Allen region IDs
[ ]:
ad.region_barchart(regions, composite_regions=composite_regions, normalize_region=512)