Preprocessing
Image Processing
brainlit.preprocessing.center(data: np.ndarray)
Centers data by subtracting the mean.
- Parameters
data (array-like) -- data to be centered
- Returns
data_centered -- centered data
- Return type
array-like
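The centering operation can be sketched in plain numpy (an illustration of the behavior described above, not the brainlit implementation; the name center_sketch is ours):

```python
import numpy as np

def center_sketch(data: np.ndarray) -> np.ndarray:
    """Subtract the global mean so the result has mean zero."""
    return data - data.mean()

a = np.array([1.0, 2.0, 3.0])
centered = center_sketch(a)  # array([-1., 0., 1.])
```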
brainlit.preprocessing.contrast_normalize(data: np.ndarray, centered: bool = False)
Normalizes image data to have a variance of 1.
- Parameters
data (array-like) -- data to be normalized
centered (boolean) -- When False (the default), the data is centered first
- Returns
data -- normalized data
- Return type
array-like
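A minimal sketch of contrast normalization as described above (illustrative only; contrast_normalize_sketch is not the brainlit function itself):

```python
import numpy as np

def contrast_normalize_sketch(data: np.ndarray, centered: bool = False) -> np.ndarray:
    """Scale data to unit variance; center first unless already centered."""
    if not centered:
        data = data - data.mean()
    return data / data.std()

a = np.array([0.0, 2.0, 4.0])
normalized = contrast_normalize_sketch(a)  # has variance 1
```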
brainlit.preprocessing.whiten(img: np.ndarray, window_size: np.ndarray, step_size: np.ndarray, centered: bool = False, epsilon: float = 1e-05, type: str = 'PCA')
Performs PCA or ZCA whitening on an array. This preprocessing step is described in [1].
- Parameters
img (array-like) -- image to be vectorized
window_size (array-like) -- window size dictating the neighborhood to be vectorized, same number of dimensions as img, based on the top-left corner
step_size (array-like) -- step size in each direction of the window, same number of dimensions as img
centered (boolean) -- When False (the default), the data is centered first
epsilon (float) -- regularization constant added during whitening to avoid division by very small values
type (string) -- Determines the type of whitening. Can be either 'PCA' (default) or 'ZCA'
- Returns
data_whitened (array-like) -- whitened data
S (2D array) -- Singular value array of covariance of vectorized image
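PCA and ZCA whitening on an already-vectorized data matrix can be sketched as follows (the brainlit function additionally handles the windowed vectorization of the image; whiten_sketch is an illustrative name, and the exact handling of epsilon is an assumption):

```python
import numpy as np

def whiten_sketch(X: np.ndarray, epsilon: float = 1e-5, type: str = "PCA"):
    """Whiten a samples-by-features matrix X; return whitened data and singular values."""
    Xc = X - X.mean(axis=0)                        # center each feature
    cov = Xc.T @ Xc / Xc.shape[0]                  # feature covariance
    U, S, _ = np.linalg.svd(cov)                   # cov = U diag(S) U^T
    W = np.diag(1.0 / np.sqrt(S + epsilon)) @ U.T  # PCA whitening matrix
    if type == "ZCA":
        W = U @ W                                  # rotate back into the original basis
    return Xc @ W.T, S

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
X_white, S = whiten_sketch(X)  # covariance of X_white is approximately the identity
```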
brainlit.preprocessing.window_pad(img: np.ndarray, window_size: np.ndarray, step_size: np.ndarray)
Pads an image at its edges so the window can convolve evenly. The padding is a copy of the edge values.
- Parameters
img (array-like) -- image to be padded
window_size (array-like) -- window size that will be convolved, same number of dimensions as img
step_size (array-like) -- step size in each direction of the window convolution, same number of dimensions as img
- Returns
img_padded (array-like) -- padded image
pad_size (array-like) -- amount of padding in every direction of the image
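Padding with a copy of the edges corresponds to numpy's "edge" pad mode; the pad amount below is illustrative, not brainlit's exact computation from window_size and step_size:

```python
import numpy as np

a = np.array([[1, 2],
              [3, 4]])
# Copy edge values outward by one voxel on every side,
# analogous to the edge-copy padding described above.
padded = np.pad(a, pad_width=1, mode="edge")
```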
brainlit.preprocessing.undo_pad(img: np.ndarray, pad_size: np.ndarray)
Removes padding from the edges of an image.
- Parameters
img (array-like) -- padded image
pad_size (array-like) -- amount of padding in every direction of the image
- Returns
img -- unpadded image
- Return type
array-like
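Undoing the padding amounts to slicing the pad amount off every axis; a sketch assuming symmetric per-axis padding (undo_pad_sketch is an illustrative name):

```python
import numpy as np

def undo_pad_sketch(img: np.ndarray, pad_size) -> np.ndarray:
    """Slice off pad_size voxels from both ends of every axis."""
    slices = tuple(slice(p, s - p) for p, s in zip(pad_size, img.shape))
    return img[slices]

original = np.arange(6).reshape(2, 3)
padded = np.pad(original, pad_width=((1, 1), (2, 2)), mode="edge")
restored = undo_pad_sketch(padded, (1, 2))  # recovers the original array
```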
brainlit.preprocessing.vectorize_img(img: np.ndarray, window_size: np.ndarray, step_size: np.ndarray)
Reshapes an image by vectorizing different neighborhoods of the image.
- Parameters
img (array-like) -- image to be vectorized
window_size (array-like) -- window size dictating the neighborhood to be vectorized, same number of dimensions as img, based on the top-left corner
step_size (array-like) -- step size in each direction of the window, same number of dimensions as img
- Returns
vectorized -- vectorized image
- Return type
array-like
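A 2-D sketch of the vectorization idea (the real function is n-dimensional; the one-window-per-column layout is our assumption):

```python
import numpy as np

def vectorize_img_sketch(img, window_size, step_size):
    """Flatten each window (anchored at its top-left corner) into a column."""
    cols = []
    # Slide the window over the image at the given step in each axis.
    for i in range(0, img.shape[0] - window_size[0] + 1, step_size[0]):
        for j in range(0, img.shape[1] - window_size[1] + 1, step_size[1]):
            cols.append(img[i:i + window_size[0], j:j + window_size[1]].ravel())
    return np.stack(cols, axis=1)

img = np.arange(16).reshape(4, 4)
vec = vectorize_img_sketch(img, (2, 2), (2, 2))  # 4 windows of 4 voxels each
```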
brainlit.preprocessing.imagize_vector(img: np.ndarray, orig_shape: tuple, window_size: np.ndarray, step_size: np.ndarray)
Reshapes a vectorized image back to its original shape.
- Parameters
img (array-like) -- vectorized image
orig_shape (tuple) -- dimensions of original image
window_size (array-like) -- window size dictating the neighborhood to be vectorized, same number of dimensions as img, based on the top-left corner
step_size (array-like) -- step size in each direction of the window, same number of dimensions as img
- Returns
imagized -- original image
- Return type
array-like
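The inverse operation can be sketched in 2-D by scattering each flattened window column back to its position, assuming non-overlapping windows (the real function handles the general n-dimensional case; imagize_vector_sketch is an illustrative name):

```python
import numpy as np

def imagize_vector_sketch(vec, orig_shape, window_size, step_size):
    """Scatter each flattened window column back to its image position."""
    out = np.zeros(orig_shape, dtype=vec.dtype)
    k = 0
    for i in range(0, orig_shape[0] - window_size[0] + 1, step_size[0]):
        for j in range(0, orig_shape[1] - window_size[1] + 1, step_size[1]):
            out[i:i + window_size[0], j:j + window_size[1]] = vec[:, k].reshape(window_size)
            k += 1
    return out

# One flattened 2x2 window per column, for a 2x4 image with step 2.
vec = np.array([[0, 2],
                [1, 3],
                [4, 6],
                [5, 7]])
img = imagize_vector_sketch(vec, (2, 4), (2, 2), (2, 2))
```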
Image Filters
brainlit.preprocessing.gabor_filter(input: np.ndarray, sigma: Union[float, List[float]], phi: Union[float, List[float]], frequency: float, offset: float = 0.0, output: Optional[Union[np.ndarray, np.dtype, None]] = None, mode: str = 'reflect', cval: float = 0.0, truncate: float = 4.0)
Multidimensional Gabor filter. A Gabor filter is an elementwise product between a Gaussian and a complex exponential.
- Parameters
input (array_like) -- The input array.
sigma (scalar or sequence of scalars) -- Standard deviation for Gaussian kernel. The standard deviations of the Gaussian filter are given for each axis as a sequence, or as a single number, in which case it is equal for all axes.
phi (scalar or sequence of scalars) -- Angles specifying orientation of the periodic complex exponential. If the input is n-dimensional, then phi is a sequence of length n-1. Convention follows https://en.wikipedia.org/wiki/N-sphere#Spherical_coordinates.
frequency (scalar) -- Frequency of the complex exponential. Units are revolutions/voxels.
offset (scalar) -- Phase shift of the complex exponential. Units are radians.
output (array or dtype, optional) -- The array in which to place the output, or the dtype of the returned array. By default an array of the same dtype as input will be created. Only the real component will be saved if output is an array.
mode ({‘reflect’, ‘constant’, ‘nearest’, ‘mirror’, ‘wrap’}, optional) -- The mode parameter determines how the input array is extended beyond its boundaries. Default is ‘reflect’.
cval (scalar, optional) -- Value to fill past edges of input if mode is ‘constant’. Default is 0.0.
truncate (float) -- Truncate the filter at this many standard deviations. Default is 4.0.
- Returns
real, imaginary -- Returns real and imaginary responses, arrays of same shape as input.
- Return type
arrays
Notes
The multidimensional filter is implemented by creating a Gabor filter array, then using the convolve method. Note that sigma specifies the standard deviations of the Gaussian along the coordinate axes, and the Gaussian is not rotated. This is unlike skimage.filters.gabor, whose Gaussian is rotated along with the complex exponential. This design choice makes it easier to choose sigma when voxels are anisotropic.
Examples
>>> from brainlit.preprocessing import gabor_filter
>>> import numpy as np
>>> a = np.arange(50, step=2).reshape((5, 5))
>>> a
array([[ 0,  2,  4,  6,  8],
       [10, 12, 14, 16, 18],
       [20, 22, 24, 26, 28],
       [30, 32, 34, 36, 38],
       [40, 42, 44, 46, 48]])
>>> gabor_filter(a, sigma=1, phi=[0.0], frequency=0.1)
(array([[ 3,  5,  6,  8,  9],
        [ 9, 10, 12, 13, 14],
        [16, 18, 19, 21, 22],
        [24, 25, 27, 28, 30],
        [29, 30, 32, 34, 35]]),
 array([[ 0,  0, -1,  0,  0],
        [ 0,  0, -1,  0,  0],
        [ 0,  0, -1,  0,  0],
        [ 0,  0, -1,  0,  0],
        [ 0,  0, -1,  0,  0]]))
>>> from scipy import misc
>>> import matplotlib.pyplot as plt
>>> fig = plt.figure()
>>> plt.gray()  # show the filtered result in grayscale
>>> ax1 = fig.add_subplot(121)  # left side
>>> ax2 = fig.add_subplot(122)  # right side
>>> ascent = misc.ascent()
>>> result = gabor_filter(ascent, sigma=5, phi=[0.0], frequency=0.1)
>>> ax1.imshow(ascent)
>>> ax2.imshow(result[0])
>>> plt.show()
Segmentation Processing
brainlit.preprocessing.getLargestCC(segmentation: np.ndarray)
Returns the largest connected component of an image.
- Parameters
segmentation (array-like) -- segmentation data of an image or volume
- Returns
largeCC -- segmentation with only the largest connected component
- Return type
array-like
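A sketch of selecting the largest connected component using scipy's labeling (illustrative; the connectivity used by brainlit is an assumption, and largest_cc_sketch is our name):

```python
import numpy as np
from scipy import ndimage

def largest_cc_sketch(segmentation: np.ndarray) -> np.ndarray:
    """Keep only the largest connected foreground component."""
    labeled, n = ndimage.label(segmentation > 0)
    if n == 0:
        return np.zeros_like(segmentation, dtype=bool)
    sizes = np.bincount(labeled.ravel())[1:]   # component sizes, skipping background
    return labeled == (np.argmax(sizes) + 1)

seg = np.array([[1, 1, 0, 0],
                [0, 0, 0, 1]])
mask = largest_cc_sketch(seg)  # True only on the two-voxel component
```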
brainlit.preprocessing.removeSmallCCs(segmentation: np.ndarray, size: Union[int, float], verbose=False)
Removes small connected components from an image.
- Parameters
segmentation (array-like) -- segmentation data of an image or volume
size (scalar) -- maximum size of connected components to remove
- Returns
largeCCs -- segmentation with small connected components removed
- Return type
array-like
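Removing small components can be sketched similarly (whether the size threshold is inclusive is our assumption, as is the name remove_small_ccs_sketch):

```python
import numpy as np
from scipy import ndimage

def remove_small_ccs_sketch(segmentation: np.ndarray, size: int) -> np.ndarray:
    """Drop connected components whose voxel count is at most `size`."""
    labeled, _ = ndimage.label(segmentation > 0)
    sizes = np.bincount(labeled.ravel())
    keep = sizes > size
    keep[0] = False                # never keep the background label
    return keep[labeled]

seg = np.array([[1, 1, 1, 0],
                [0, 0, 0, 1]])
mask = remove_small_ccs_sketch(seg, size=1)  # drops the single-voxel component
```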
brainlit.preprocessing.label_points(labels: np.array, points: list, res: list)
Adjusts points so they fall on a foreground component of labels.
brainlit.preprocessing.compute_frags(soma_coords: list, labels: np.array, im_processed: np.array, threshold: float, res: list, chunk_size: list = None, ncpu: int = 2)
Preprocesses a neuron image segmentation by splitting up non-soma components into 5 micron segments.
- Parameters
soma_coords (list) -- list of voxel coordinates of somas
labels (np.array) -- image segmentation
im_processed (np.array) -- voxel-wise probability predictions for foreground
threshold (float) -- threshold used to segment probability predictions into mask
res (list) -- voxel size in image
chunk_size (list) -- size of image chunks
ncpu (int) -- number of cpus to use in parallel mode
- Returns
new image segmentation - different numbers indicate different fragments, 0 is background
- Return type
np.array
brainlit.preprocessing.remove_somas(soma_coords: list, labels: np.array, im_processed: np.array, res: list, verbose=False)
Helper function of split_frags. Removes the area around somas.
- Parameters
soma_coords (list) -- list of voxel coordinates of somas
labels (np.array) -- image segmentation
im_processed (np.array) -- voxel-wise probability predictions for foreground
res (list) -- voxel size in image
- Returns
np.array -- probability predictions, with the soma regions masked
list -- coordinates of the points
dictionary -- map from component in labels to the set of points that were placed there
list -- masks of the different somas
brainlit.preprocessing.rename_states_consecutively(new_labels: np.array)
Helper function of split_frags. Relabels components in an image segmentation so that the unique values are consecutive.
- Parameters
new_labels (np.array) -- new image segmentation - different numbers indicate different fragments, 0 is background
- Returns
new image segmentation - different numbers indicate different fragments, 0 is background
- Return type
np.array
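Consecutive relabeling can be sketched with np.unique (illustrative; this assumes non-negative labels with 0 present as background, and rename_consecutive_sketch is our name):

```python
import numpy as np

def rename_consecutive_sketch(labels: np.ndarray) -> np.ndarray:
    """Map the sorted unique label values onto 0..k; 0 stays the smallest label."""
    _, inverse = np.unique(labels, return_inverse=True)
    return inverse.reshape(labels.shape)

labels = np.array([[0, 5, 5],
                   [9, 0, 2]])
renamed = rename_consecutive_sketch(labels)  # values become 0, 1, 2, 3
```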