Algorithms

Fragment Generation

brainlit.algorithms.generate_fragments.get_seed(voxel)[source]

Get a seed point for the center of a brain volume.

Parameters

voxel (tuple) -- The seed coordinates in (x, y, z) order.

Returns

A tuple containing the (x, y, z)-coordinates of the seed.

Return type

tuple

brainlit.algorithms.generate_fragments.get_img_T1(img)[source]

Converts a volume cutout to a SimpleITK image, as well as a second SimpleITK image with intensity values scaled to the range 0-255.

Parameters

img (cloudvolume.volumecutout.VolumeCutout) -- The volume to convert to a SimpleITK image.

Returns

  • img_T1 (SimpleITK.SimpleITK.Image) -- A SimpleITK image.

  • img_T1_255 (SimpleITK.SimpleITK.Image) -- A SimpleITK image with intensity values between 0 and 255 inclusive.
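
The conversion can be sketched directly with SimpleITK (a minimal sketch, assuming the cutout can be viewed as a numpy array; not the exact brainlit implementation):

import numpy as np
import SimpleITK as sitk

def to_sitk_images(arr: np.ndarray):
    # Build a SimpleITK image from the raw array...
    img_T1 = sitk.GetImageFromArray(arr.astype(np.float32))
    # ...and a copy with intensities rescaled to [0, 255] (the defaults).
    img_T1_255 = sitk.Cast(sitk.RescaleIntensity(img_T1), sitk.sitkUInt8)
    return img_T1, img_T1_255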

brainlit.algorithms.generate_fragments.thres_from_gmm(img, random_seed=2)[source]

Computes a numerical threshold for segmentation based on a 2-Component Gaussian mixture model.

The threshold is the minimum value included in the component of the Gaussian mixture model that contains the highest intensity value.

Parameters
  • img (cloudvolume.volumecutout.VolumeCutout) -- The image or volume to threshold.

  • random_seed (int) -- The random seed for the Gaussian mixture model.

Returns

The threshold value.

Return type

int
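
The idea can be sketched with scikit-learn's GaussianMixture (a minimal sketch, not the brainlit implementation):

import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_threshold(img: np.ndarray, random_seed: int = 2):
    flat = np.asarray(img).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=random_seed).fit(flat)
    pred = gmm.predict(flat)
    # The component containing the brightest voxel is taken as foreground;
    # the threshold is the smallest intensity assigned to that component.
    bright_component = pred[np.argmax(flat)]
    return flat[pred == bright_component].min()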

brainlit.algorithms.generate_fragments.fast_marching_seg(img, seed, stopping_value=150, sigma=0.5)[source]

Computes a fast-marching segmentation.

Parameters
  • img (cloudvolume.volumecutout.VolumeCutout) -- The volume to segment.

  • seed (tuple) -- The seed containing a coordinate within a known segment.

  • stopping_value (float) -- The algorithm stops when the value of the smallest trial point is greater than this stopping value.

  • sigma (float) -- Sigma used in computing the feature image.

Returns

labels -- An array consisting of the pixelwise segmentation.

Return type

numpy.ndarray
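
A sketch of the corresponding SimpleITK pipeline (assumes img_T1 is a SimpleITK image, e.g. from get_img_T1, and that the seed is given in SimpleITK index order; not the exact brainlit implementation):

import SimpleITK as sitk

def fast_marching_sketch(img_T1, seed, stopping_value=150, sigma=0.5):
    # Feature image: edges produce large gradient magnitudes...
    feature = sitk.GradientMagnitudeRecursiveGaussian(img_T1, sigma=sigma)
    # ...which the bounded reciprocal turns into low speeds, so the
    # front moves quickly in flat regions and stalls at edges.
    speed = sitk.BoundedReciprocal(feature)
    fm = sitk.FastMarchingImageFilter()
    fm.SetTrialPoints([seed])
    fm.SetStoppingValue(stopping_value)
    arrival = fm.Execute(speed)
    # Voxels whose arrival time is below the stopping value form the segment.
    return sitk.GetArrayFromImage(arrival < stopping_value)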

brainlit.algorithms.generate_fragments.level_set_seg(img, seed, lower_threshold=None, upper_threshold=None, factor=2, max_rms_error=0.02, num_iter=1000, curvature_scaling=0.5, propagation_scaling=1)[source]

Computes a level-set segmentation.

The algorithm converges when the root mean squared change in the level set function for an iteration falls below the threshold, or when the maximum number of iterations has elapsed.

Parameters
  • img (cloudvolume.volumecutout.VolumeCutout) -- The volume to segment.

  • seed (tuple) -- The seed containing a coordinate within a known segment.

  • lower_threshold (float) -- The lower threshold for segmentation. Set based on image statistics if None.

  • upper_threshold (float) -- The upper threshold for segmentation. Set based on image statistics if None.

  • factor (float) -- The scaling factor on the standard deviation used in computing thresholds from image statistics.

  • max_rms_error (float) -- Root mean squared convergence criterion threshold.

  • num_iter (int) -- Maximum number of iterations.

  • curvature_scaling (float) -- Curvature scaling for the segmentation.

  • propagation_scaling (float) -- Propagation scaling for the segmentation.

Returns

labels -- An array consisting of the pixelwise segmentation.

Return type

numpy.ndarray
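
A sketch with SimpleITK's threshold level-set filter, following the standard SimpleITK recipe (a minimal sketch; assumes img_T1 is a SimpleITK image and that thresholds are given explicitly):

import SimpleITK as sitk

def level_set_sketch(img_T1, seed, lower, upper, max_rms_error=0.02,
                     num_iter=1000, curvature_scaling=0.5, propagation_scaling=1.0):
    # Initial level set: a small ball around the seed, as a signed distance map.
    seed_img = sitk.Image(img_T1.GetSize(), sitk.sitkUInt8)
    seed_img.CopyInformation(img_T1)
    seed_img[seed] = 1
    seed_img = sitk.BinaryDilate(seed_img, [3] * 3)
    init_ls = sitk.SignedMaurerDistanceMap(
        seed_img, insideIsPositive=True, useImageSpacing=True
    )
    ls = sitk.ThresholdSegmentationLevelSetImageFilter()
    ls.SetLowerThreshold(lower)
    ls.SetUpperThreshold(upper)
    ls.SetMaximumRMSError(max_rms_error)
    ls.SetNumberOfIterations(num_iter)
    ls.SetCurvatureScaling(curvature_scaling)
    ls.SetPropagationScaling(propagation_scaling)
    ls.ReverseExpansionDirectionOn()  # needed because inside is positive
    out = ls.Execute(init_ls, sitk.Cast(img_T1, sitk.sitkFloat32))
    return sitk.GetArrayFromImage(out > 0)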

brainlit.algorithms.generate_fragments.connected_threshold(img, seed, lower_threshold=None, upper_threshold=255)[source]

Compute a threshold-based segmentation via connected region growing.

Labelled pixels are connected to a seed and lie within a range of values.

Parameters
  • img (cloudvolume.volumecutout.VolumeCutout) -- The volume to segment.

  • seed (tuple) -- The seed containing a coordinate within a known segment.

  • lower_threshold (float) -- The lower threshold for the region growth. Set by a 2-component Gaussian mixture model if None.

  • upper_threshold (float) -- The upper threshold for the region growth.

Returns

labels -- An array consisting of the pixelwise segmentation.

Return type

numpy.ndarray
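
The growth step corresponds to SimpleITK's ConnectedThreshold filter; a usage sketch with hypothetical threshold values (img_T1 as a SimpleITK image, seed as above):

>>> import SimpleITK as sitk
>>> seg = sitk.ConnectedThreshold(img_T1, seedList=[seed], lower=100, upper=255)
>>> labels = sitk.GetArrayFromImage(seg)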

brainlit.algorithms.generate_fragments.confidence_connected_threshold(img, seed, num_iter=1, multiplier=1, initial_neighborhood_radius=1, replace_value=1)[source]

Compute a threshold-based segmentation via confidence-connected region growing.

The segmentation is based on pixels with intensities that are consistent with the pixel statistics of a seed point. Pixels connected to the seed point whose values lie within a confidence interval are grouped. The confidence interval is the mean plus or minus the "multiplier" times the standard deviation. After the initial segmentation is complete, the mean and standard deviation are recalculated at each iteration using the pixels in the previous segmentation.

Parameters
  • img (cloudvolume.volumecutout.VolumeCutout) -- The volume to segment.

  • seed (tuple) -- The seed containing a coordinate within a known segment.

  • num_iter (int) -- The number of iterations to run the algorithm.

  • multiplier (float) -- Multiplier for the confidence interval.

  • initial_neighborhood_radius (int) -- The initial neighborhood radius for computing statistics on the seed pixel.

  • replace_value (int) -- The value to replace thresholded pixels.

Returns

labels -- An array consisting of the pixelwise segmentation.

Return type

numpy.ndarray
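
The corresponding SimpleITK call (a sketch; img_T1 and seed as above):

>>> import SimpleITK as sitk
>>> seg = sitk.ConfidenceConnected(
...     img_T1, seedList=[seed], numberOfIterations=1,
...     multiplier=1, initialNeighborhoodRadius=1, replaceValue=1,
... )
>>> labels = sitk.GetArrayFromImage(seg)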

brainlit.algorithms.generate_fragments.neighborhood_connected_threshold(img, seed, lower_threshold=None, upper_threshold=255)[source]

Compute a threshold-based segmentation via neighborhood-connected region growing.

Labelled pixels are connected to a seed and lie within a neighborhood.

Parameters
  • img (cloudvolume.volumecutout.VolumeCutout) -- The volume to segment.

  • seed (tuple) -- The seed containing a coordinate within a known segment.

  • lower_threshold (float) -- The lower threshold for the region growth. Set by a 2-component Gaussian mixture model if None.

  • upper_threshold (float) -- The upper threshold for the region growth.

Returns

labels -- An array consisting of the pixelwise segmentation.

Return type

numpy.ndarray
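
The corresponding SimpleITK call, which requires every voxel in a radius around a candidate voxel to fall within the thresholds (a sketch with hypothetical radius and threshold values):

>>> import SimpleITK as sitk
>>> seg = sitk.NeighborhoodConnected(
...     img_T1, seedList=[seed], lower=100, upper=255, radius=[1, 1, 1]
... )
>>> labels = sitk.GetArrayFromImage(seg)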

brainlit.algorithms.generate_fragments.otsu(img, seed)[source]

Compute a threshold-based segmentation via Otsu's method.

Parameters
  • img (cloudvolume.volumecutout.VolumeCutout) -- The volume to segment.

  • seed (tuple) -- The seed containing a coordinate within a known segment.

Returns

labels -- An array consisting of the pixelwise segmentation.

Return type

numpy.ndarray
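
A sketch of the thresholding step with SimpleITK; presumably the seed is used to decide which side of Otsu's threshold counts as foreground (an assumption, since only the thresholding itself is shown here):

>>> import SimpleITK as sitk
>>> seg = sitk.OtsuThreshold(img_T1, 0, 1)  # insideValue=0, outsideValue=1
>>> labels = sitk.GetArrayFromImage(seg)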

brainlit.algorithms.generate_fragments.gmm_seg(img, seed, random_seed=3)[source]

Compute a threshold-based segmentation via a 2-component Gaussian mixture model.

Parameters
  • img (cloudvolume.volumecutout.VolumeCutout) -- The volume to segment.

  • seed (tuple) -- The seed containing a coordinate within a known segment.

  • random_seed (int) -- The random seed for the Gaussian mixture model.

Returns

labels -- An array consisting of the pixelwise segmentation.

Return type

numpy.ndarray
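
A sketch of the idea with scikit-learn (a minimal sketch, not the brainlit implementation; choosing the foreground component via the seed is the natural reading of the signature):

import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_seg_sketch(img, seed, random_seed=3):
    arr = np.asarray(img)
    flat = arr.reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=random_seed).fit(flat)
    pred = gmm.predict(flat).reshape(arr.shape)
    # Voxels in the same mixture component as the seed form the segment.
    return (pred == pred[tuple(seed)]).astype(int)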

Connect Fragments

class brainlit.algorithms.connect_fragments.viterbi_algorithm(image, labels, soma_labels, resolution=[1, 1, 1])[source]

Connects fragments using the Viterbi algorithm, a dynamic programming approach.

Parameters
  • image -- Intensity data, numpy or python array of shape [x,y,channels]

  • labels -- Label data, numpy or python array of shape [x,y,1] where the 3rd channel is labels: 0 for background and 1...N for fragments

  • soma_labels -- Dictionary of soma labels and their corresponding coordinates in the imagespace

  • resolution -- Scaling factor along each dimension for anisotropic images, numpy vector or python array of 1x3

num_components

Total number of fragment objects

image

Image data array

labels

Label data array

somas

Dictionary of soma labels and their locations in imagespace

cost_mat_dist

Distance cost matrix, initialized with -1

cost_mat_int

Intensity cost matrix, initialized with -1

resolution

Resolution scaling along each dimension

connection_mat

Connection matrix of a "from" point to a "to" point

end_points

Endpoints dictionary, labels as keys and 2 corresponding coordinates as values

not_lines

Dictionary of blob-like objects

sigma

Hard-coded variance for converting certain values to a distribution

compute_all_dists(self)[source]

Fills the distance cost matrix for each component-component pair.

compute_bounds(self, label, pad)[source]

Compute the coordinates of a bounding box around a masked object, with the given padding.

Parameters
  • label (np.array) -- The mask of the object.

  • pad (float) -- The padding around the object, in µm.

Returns

The integer coordinates of the bounding box.

Return type

[ints]

frags_to_lines_le_skel(self, nonline_labels=[])[source]

Relies on the assumption that self.labels contains values as if produced by measure.label.

line_blob_dist(self, lbl1, lbl2)[source]

Compute the distance between a line-like object and a blob-like object, based on the closest endpoint of the line to the closest point on the blob boundary.

Parameters
  • lbl1 -- The line component label.

  • lbl2 -- The soma/blob component label.

line_line_dist(self, lbl1, lbl2)[source]

Compute the distance between two line-like objects, based on their closest endpoints.

Parameters
  • lbl1 -- A non-soma component label.

  • lbl2 -- A non-soma component label.

path_cost(self, prev_state, state, somas)[source]

Compute the cost of traversing to a state.

Parameters
  • prev_state (int) -- The label corresponding to the state being traversed from.

  • state (int) -- The label corresponding to the state being traversed to.

  • somas -- List of components that are cell bodies.

Returns

The cost of traversing from prev_state to state

Return type

total_cost (float)

viterbi_frag(self, start_lbl, K, somas)[source]

Run the Viterbi algorithm on an image that has been masked into connected components.

Parameters
  • start_lbl (int) -- The starting component.

  • K (int) -- The number of iterations.

  • somas (list) -- List of components that are cell bodies.

Returns

  • top_path ((cost, {path})) -- The best path of length K starting at start_lbl.

  • sort_paths ([(cost, {path})]) -- All paths of length K starting at start_lbl, ordered by cost.

viterbi_frag_next_layer(self, paths_k, somas)[source]

Query the next layer of paths, up to K layers.

Parameters
  • paths_k ({state: (cost, path)}) -- Dict of optimal paths of length k to reach each state.

  • somas (list) -- List of components that are cell bodies.

Returns

  • paths_k1 ({state: (cost, path)}) -- Dict of paths of length k+1 to reach each state.

  • closest_state (int) -- The state to traverse to with the lowest cost.
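
Putting the methods together, a hypothetical end-to-end sketch (the call order is inferred from the descriptions above, the data values are made up, and image and labels arrays shaped as described above are assumed to be in scope):

>>> from brainlit.algorithms.connect_fragments import viterbi_algorithm
>>> somas = {2: [(10, 10, 5)]}  # hypothetical soma label -> coordinates
>>> alg = viterbi_algorithm(image, labels, somas, resolution=[1, 1, 1])
>>> alg.frags_to_lines_le_skel()
>>> alg.compute_all_dists()
>>> top_path, sort_paths = alg.viterbi_frag(1, K=5, somas=list(somas.keys()))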

Trace Analysis

brainlit.algorithms.trace_analysis.speed(x: np.ndarray, t: np.ndarray, c: np.ndarray, k: np.integer, aux_outputs: bool = False)[source]

Compute the speed of a B-Spline.

The speed is the norm of the first derivative of the B-Spline.

Parameters
  • x -- A 1xL array of parameter values at which the speed of the B-Spline will be evaluated. It is required to be non-empty, one-dimensional, and real-valued.

  • t -- A 1xm array representing the knots of the B-spline. It is required to be a non-empty, non-decreasing, and one-dimensional sequence of real-valued elements. For a B-Spline of degree k, at least 2k + 1 knots are required.

  • c -- A dxn array representing the coefficients/control points of the B-spline. Given n real-valued, d-dimensional points x_k = (x_k(1),...,x_k(d)), c is the non-empty matrix whose columns are x_1^T,...,x_n^T. For a B-Spline of degree k, n cannot be less than m-k-1.

  • k -- A non-negative integer representing the degree of the B-spline.

Returns

A 1xL array containing the speed of the B-Spline evaluated at x

Return type

speed

References: [1] Kouba, Parametric Equations.
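
A usage sketch with a spline fitted by scipy's splprep (the (t, c, k) unpacking matches the parameter descriptions above; the trace data is made up). The same call pattern applies to curvature and torsion below:

>>> import numpy as np
>>> from scipy.interpolate import splprep
>>> from brainlit.algorithms.trace_analysis import speed
>>> pts = np.random.rand(3, 10)  # hypothetical 3-D trace, d x N
>>> tck, u = splprep(pts, k=3)
>>> t, c, k = tck
>>> s = speed(np.array(u), t, np.array(c), k)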

brainlit.algorithms.trace_analysis.curvature(x: np.ndarray, t: np.ndarray, c: np.ndarray, k: np.integer, aux_outputs: bool = False)[source]

Compute the curvature of a B-Spline.

The curvature measures the failure of a curve, r(u), to be a straight line. It is defined as

\[k = \lVert \frac{dT}{ds} \rVert,\]

where T is the unit tangent vector, and s is the arc length:

\[T = \frac{dr}{ds},\quad s = \int_0^t \lVert r'(u) \rVert du,\]

where r(u) is the position vector as a function of time.

The curvature can also be computed as

\[k = \lVert r'(t) \times r''(t)\rVert / \lVert r'(t) \rVert^3.\]
Parameters
  • x -- A 1xL array of parameter values at which the curvature of the B-Spline will be evaluated. It is required to be non-empty, one-dimensional, and real-valued.

  • t -- A 1xm array representing the knots of the B-spline. It is required to be a non-empty, non-decreasing, and one-dimensional sequence of real-valued elements. For a B-Spline of degree k, at least 2k + 1 knots are required.

  • c -- A dxn array representing the coefficients/control points of the B-spline. Given n real-valued, d-dimensional points x_k = (x_k(1),...,x_k(d)), c is the non-empty matrix whose columns are x_1^T,...,x_n^T. For a B-Spline of degree k, n cannot be less than m-k-1.

  • k -- A non-negative integer representing the degree of the B-spline.

Returns

A 1xL array containing the curvature of the B-Spline evaluated at x

Return type

curvature

References: [1] Máté Attila, The Frenet–Serret formulas.

brainlit.algorithms.trace_analysis.torsion(x: np.ndarray, t: np.ndarray, c: np.ndarray, k: np.integer, aux_outputs: bool = False)[source]

Compute the torsion of a B-Spline.

The torsion measures the failure of a curve, r(u), to be planar. If the curvature k of a curve is not zero, then the torsion is defined as

\[\tau = -n \cdot b',\]

where n is the principal normal vector, and b' the derivative w.r.t. the arc length s of the binormal vector.

The torsion can also be computed as

\[\tau = \lvert r'(t), r''(t), r'''(t) \rvert / \lVert r'(t) \times r''(t) \rVert^2,\]

where r(u) is the position vector as a function of time.

Parameters
  • x -- A 1xL array of parameter values at which the torsion of the B-Spline will be evaluated. It is required to be non-empty, one-dimensional, and real-valued.

  • t -- A 1xm array representing the knots of the B-spline. It is required to be a non-empty, non-decreasing, and one-dimensional sequence of real-valued elements. For a B-Spline of degree k, at least 2k + 1 knots are required.

  • c -- A dxn array representing the coefficients/control points of the B-spline. Given n real-valued, d-dimensional points x_k = (x_k(1),...,x_k(d)), c is the non-empty matrix whose columns are x_1^T,...,x_n^T. For a B-Spline of degree k, n cannot be less than m-k-1.

  • k -- A non-negative integer representing the degree of the B-spline.

Returns

A 1xL array containing the torsion of the B-Spline evaluated at x

Return type

torsion

References: [1] Máté Attila, The Frenet–Serret formulas.

class brainlit.algorithms.trace_analysis.GeometricGraph(df=None)[source]

An undirected graph class in which neuron shapes are expressed and fitted with splines.

The geometry of each neuron is represented as an undirected graph, from which a tree of splines describing the neuron is constructed. Each node is required to have a loc attribute identifying that point's location in space, defined in 3-dimensional Cartesian coordinates. The class extends nx.Graph and rejects duplicate node input.

fit_spline_tree_invariant(self)[source]

Construct a spline tree based on the path lengths.

Raises
  • ValueError -- Raised if every node is not unique in location.

  • ValueError -- Raised if a node is not assigned to at least one edge.

  • ValueError -- Raised if the graph contains undirected cycle(s).

  • ValueError -- Raised if the graph has disconnected segment(s).

Returns

A parent tree (nx.DiGraph) built around the longest path in the directed graph.

Return type

spline_tree
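
A minimal usage sketch (the node locations are made up; each node carries the required loc attribute):

>>> import numpy as np
>>> from brainlit.algorithms.trace_analysis import GeometricGraph
>>> gg = GeometricGraph()
>>> gg.add_node(1, loc=np.array([0.0, 0.0, 0.0]))
>>> gg.add_node(2, loc=np.array([1.0, 0.0, 0.0]))
>>> gg.add_node(3, loc=np.array([2.0, 1.0, 0.0]))
>>> gg.add_edge(1, 2)
>>> gg.add_edge(2, 3)
>>> spline_tree = gg.fit_spline_tree_invariant()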

Soma Detection

brainlit.algorithms.detect_somas.find_somas(volume: np.ndarray, res: list)[source]

Find bright neuron somas in an input volume.

This simple soma detector assumes that somas are brighter than the rest of the objects contained in the input volume.

To detect somas, these steps are performed:

  1. Check input volume shape. This detector requires the x and y dimensions of the input volumes to be larger than 20 pixels.

  2. Zoom volume. We found that this simple soma detector works best when the input volume has size 160 x 160 x 50. We use ndimage.zoom to scale the input volume to the desired shape.

  3. Binarize volume. We use Otsu thresholding to binarize the image.

  4. Erode the binarized image. We erode the binarized image with a structuring element whose size is directly proportional to the maximum zoom factor applied to the input volume.

  5. Remove unreasonable connected components. After erosion, we compute the equivalent diameter d of each connected component, and only keep those such that 5 µm ≤ d < 21 µm.

  6. Find relative centroids. Finally, we compute the centroids of the remaining connected components. The centroids are in voxel units, relative to the input volume.
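
A rough sketch of these steps with scipy and scikit-image (a minimal sketch, not the brainlit implementation; the diameter conversion assumes res is in nm and treats voxels as roughly isotropic):

import numpy as np
from scipy import ndimage
from skimage import filters, measure, morphology

def find_somas_sketch(volume, res):
    # Step 2: zoom to the shape the detector was tuned for.
    factors = np.array([160, 160, 50]) / np.array(volume.shape)
    vol = ndimage.zoom(volume, factors)
    # Step 3: binarize with Otsu's threshold.
    binary = vol > filters.threshold_otsu(vol)
    # Step 4: erode with an element scaled by the maximum zoom factor.
    binary = morphology.binary_erosion(
        binary, morphology.ball(int(np.ceil(factors.max())))
    )
    # Steps 5-6: filter components by equivalent diameter (in µm) and
    # collect centroids mapped back to input-volume voxel coordinates.
    voxel_um = np.array(res) / factors / 1000.0  # zoomed voxel size, nm -> µm
    lbl = measure.label(binary)
    out = np.zeros_like(lbl)
    centroids = []
    for region in measure.regionprops(lbl):
        d = region.equivalent_diameter * voxel_um.mean()  # crude isotropy assumption
        if 5 <= d < 21:
            out[lbl == region.label] = 1
            centroids.append(np.array(region.centroid) / factors)
    return len(centroids) > 0, np.array(centroids), out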

Parameters
  • volume (numpy.ndarray) -- The 3D image array to run the detector on.

  • res (list) -- A 1 x 3 list containing the resolution of each voxel in nm.

Returns

  • label (bool) -- A boolean value indicating whether the detector found any somas in the input volume.

  • rel_centroids (numpy.ndarray) -- A N x 3 array containing the relative voxel positions of the detected somas.

  • out (numpy.ndarray) -- A 160 x 160 x 50 array containing the detection mask.

Examples

>>> import os
>>> import numpy as np
>>> from cloudvolume import Bbox
>>> from brainlit.utils.session import NeuroglancerSession
>>> from brainlit.algorithms.detect_somas import find_somas
>>> # download a volume
>>> dir = "s3://open-neurodata/brainlit/brain1"
>>> dir_segments = "s3://open-neurodata/brainlit/brain1_segments"
>>> volume_keys = "4807349.0_3827990.0_2922565.75_4907349.0_3927990.0_3022565.75"
>>> mip = 1
>>> ngl_sess = NeuroglancerSession(
...     mip=mip, url=dir, url_segments=dir_segments, use_https=False
... )
>>> res = ngl_sess.cv_segments.scales[ngl_sess.mip]["resolution"]
>>> volume_coords = np.array(os.path.basename(volume_keys).split("_")).astype(float)
>>> volume_vox_min = np.round(np.divide(volume_coords[:3], res)).astype(int)
>>> volume_vox_max = np.round(np.divide(volume_coords[3:], res)).astype(int)
>>> bbox = Bbox(volume_vox_min, volume_vox_max)
>>> img = ngl_sess.pull_bounds_img(bbox)
>>> # apply soma detector
>>> label, rel_centroids, out = find_somas(img, res)