Preprocessing

Image Processing

brainlit.preprocessing.center(data)[source]

Centers data by subtracting the mean

Parameters

data (array-like) -- data to be centered

Returns

data_centered -- centered data

Return type

array-like

brainlit.preprocessing.contrast_normalize(data, centered=False)[source]

Normalizes image data to have variance of 1

Parameters
  • data (array-like) -- data to be normalized

  • centered (boolean) -- When False (the default), centers the data first

Returns

data -- normalized data

Return type

array-like
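Taken together, these two functions standardize an image to zero mean and unit variance. A minimal NumPy sketch of the equivalent computation (hypothetical stand-ins, not brainlit's exact implementation):

```python
import numpy as np

def center_sketch(data):
    # Subtract the global mean, as in brainlit.preprocessing.center
    return data - np.mean(data)

def contrast_normalize_sketch(data, centered=False):
    # Divide by the standard deviation so the variance is 1;
    # center first unless the caller says the data is already centered.
    if not centered:
        data = center_sketch(data)
    return data / np.std(data)

img = np.array([[0.0, 2.0], [4.0, 6.0]])
out = contrast_normalize_sketch(img)
```

After this step `out` has mean 0 and standard deviation 1 regardless of the input's original scale.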

brainlit.preprocessing.whiten(img, window_size, step_size, centered=False, epsilon=1e-05, type='PCA')[source]

Performs PCA or ZCA whitening on an array. This preprocessing step is described in [1].

Parameters
  • img (array-like) -- image to be vectorized

  • window_size (array-like) -- window size dictating the neighborhood to be vectorized, same number of dimensions as img, based on the top-left corner

  • step_size (array-like) -- step size in each direction of the window, same number of dimensions as img

  • centered (boolean) -- When False (the default), centers the data first

  • epsilon (float) -- Small regularization value added during whitening to avoid division by zero

  • type (string) -- Determines the type of whitening. Can be either 'PCA' (default) or 'ZCA'

Returns

  • data-whitened (array-like) -- whitened data

  • S (2D array) -- Singular value array of covariance of vectorized image

References

[1] http://ufldl.stanford.edu/tutorial/unsupervised/PCAWhitening/
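The whitening step can be sketched in NumPy following the UFLDL tutorial referenced above. `whiten_sketch` below is a hypothetical stand-in that operates on an already-vectorized matrix of patches (rows are samples); the actual `whiten` additionally handles the windowing itself:

```python
import numpy as np

def whiten_sketch(X, epsilon=1e-5, kind="PCA"):
    """Whiten the rows of X (samples x features)."""
    Xc = X - X.mean(axis=0)                      # center each feature
    cov = Xc.T @ Xc / Xc.shape[0]                # feature covariance
    U, S, _ = np.linalg.svd(cov)                 # eigendecomposition via SVD
    W = U @ np.diag(1.0 / np.sqrt(S + epsilon))  # PCA whitening matrix
    if kind == "ZCA":
        W = W @ U.T                              # rotate back to input axes
    return Xc @ W, S

# Correlated toy data: after whitening, the covariance is ~identity.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3)) @ np.array(
    [[2.0, 0.5, 0.0], [0.0, 1.0, 0.3], [0.0, 0.0, 0.7]]
)
Xw, S = whiten_sketch(X)
```

PCA whitening leaves the data in the eigenbasis; ZCA applies the extra rotation `U.T` so the whitened data stays aligned with the original axes.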

brainlit.preprocessing.window_pad(img, window_size, step_size)[source]

Pad image at edges so the window can convolve evenly. Padding will be a copy of the edges.

Parameters
  • img (array-like) -- image to be padded

  • window_size (array-like) -- window size that will be convolved, same number of dimensions as img

  • step_size (array-like) -- step size in each direction of the window convolution, same number of dimensions as img

Returns

  • img_padded (array-like) -- padded image

  • pad_size (array-like) -- amount of padding in every direction of the image
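A minimal sketch of this edge-copy padding using `np.pad` (the real `window_pad` may distribute padding differently; here each axis is padded on the trailing side until a window of size `w` with step `s` tiles it exactly):

```python
import numpy as np

def window_pad_sketch(img, window_size, step_size):
    pads = []
    for dim, w, s in zip(img.shape, window_size, step_size):
        # Pad until (padded_dim - w) is a multiple of s.
        extra = (s - (dim - w) % s) % s
        pads.append((0, extra))
    # mode="edge" replicates the border values, i.e. a copy of the edges.
    return np.pad(img, pads, mode="edge"), np.array(pads)

img = np.arange(25).reshape(5, 5)
padded, pad_size = window_pad_sketch(img, (2, 2), (2, 2))
```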

brainlit.preprocessing.undo_pad(img, pad_size)[source]

Remove padding from edges of images

Parameters
  • img (array-like) -- padded image

  • pad_size (array-like) -- amount of padding in every direction of the image

Returns

img -- unpadded image

Return type

array-like
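Undoing the padding amounts to slicing the pad amounts back off each axis. A sketch, assuming `pad_size[i]` holds `(before, after)` amounts in `np.pad`'s convention:

```python
import numpy as np

def undo_pad_sketch(img, pad_size):
    # Slice off `before` at the start and `after` at the end of each axis.
    slices = tuple(
        slice(before, dim - after if after > 0 else None)
        for (before, after), dim in zip(pad_size, img.shape)
    )
    return img[slices]

a = np.pad(np.ones((3, 3)), ((1, 2), (0, 1)), mode="constant")
restored = undo_pad_sketch(a, [(1, 2), (0, 1)])
```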

brainlit.preprocessing.vectorize_img(img, window_size, step_size)[source]

Reshapes an image by vectorizing different neighborhoods of the image.

Parameters
  • img (array-like) -- image to be vectorized

  • window_size (array-like) -- window size dictating the neighborhood to be vectorized, same number of dimensions as img, based on the top-left corner

  • step_size (array-like) -- step size in each direction of the window, same number of dimensions as img

Returns

vectorized -- vectorized image

Return type

array-like
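The vectorization can be sketched for the 2D case as follows (a hypothetical illustration, not brainlit's implementation): each window, anchored at its top-left corner and moved by `step_size`, is flattened into one row of the output.

```python
import numpy as np

def vectorize_img_sketch(img, window_size, step_size):
    rows = []
    for i in range(0, img.shape[0] - window_size[0] + 1, step_size[0]):
        for j in range(0, img.shape[1] - window_size[1] + 1, step_size[1]):
            # Flatten each window into one row.
            rows.append(img[i:i + window_size[0], j:j + window_size[1]].ravel())
    return np.array(rows)

img = np.arange(16).reshape(4, 4)
vec = vectorize_img_sketch(img, (2, 2), (2, 2))
```

For a 4x4 image with 2x2 windows and step 2 this yields a 4x4 matrix, one row per window.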

brainlit.preprocessing.imagize_vector(img, orig_shape, window_size, step_size)[source]

Reshapes a vectorized image back to its original shape.

Parameters
  • img (array-like) -- vectorized image

  • orig_shape (tuple) -- dimensions of original image

  • window_size (array-like) -- window size dictating the neighborhood to be vectorized, same number of dimensions as img, based on the top-left corner

  • step_size (array-like) -- step size in each direction of the window, same number of dimensions as img

Returns

imagized -- original image

Return type

array-like
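The inverse reshape can be sketched for non-overlapping 2D windows (a hypothetical illustration; the real function also handles the general case): each row of the vectorized array is written back into its window position.

```python
import numpy as np

def imagize_vector_sketch(vec, orig_shape, window_size, step_size):
    out = np.zeros(orig_shape, dtype=vec.dtype)
    k = 0
    for i in range(0, orig_shape[0] - window_size[0] + 1, step_size[0]):
        for j in range(0, orig_shape[1] - window_size[1] + 1, step_size[1]):
            # Unflatten row k back into its window.
            out[i:i + window_size[0], j:j + window_size[1]] = \
                vec[k].reshape(window_size)
            k += 1
    return out

# Rows produced by vectorizing np.arange(16).reshape(4, 4) with 2x2
# windows and step 2; imagizing them recovers the original image.
vec = np.array([[0, 1, 4, 5], [2, 3, 6, 7],
                [8, 9, 12, 13], [10, 11, 14, 15]])
restored = imagize_vector_sketch(vec, (4, 4), (2, 2), (2, 2))
```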

Image Filters

brainlit.preprocessing.gabor_filter(input: np.ndarray, sigma: Union[float, List[float]], phi: Union[float, List[float]], frequency: float, offset: float = 0.0, output: Optional[Union[np.ndarray, np.dtype, None]] = None, mode: str = 'reflect', cval: float = 0.0, truncate: float = 4.0)[source]

Multidimensional Gabor filter. A Gabor filter is an elementwise product between a Gaussian and a complex exponential.

Parameters
  • input (array_like) -- The input array.

  • sigma (scalar or sequence of scalars) -- Standard deviation for Gaussian kernel. The standard deviations of the Gaussian filter are given for each axis as a sequence, or as a single number, in which case it is equal for all axes.

  • phi (scalar or sequence of scalars) -- Angles specifying orientation of the periodic complex exponential. If the input is n-dimensional, then phi is a sequence of length n-1. Convention follows https://en.wikipedia.org/wiki/N-sphere#Spherical_coordinates.

  • frequency (scalar) -- Frequency of the complex exponential. Units are revolutions/voxel.

  • offset (scalar) -- Phase shift of the complex exponential. Units are radians.

  • output (array or dtype, optional) -- The array in which to place the output, or the dtype of the returned array. By default an array of the same dtype as input will be created. Only the real component will be saved if output is an array.

  • mode ({‘reflect’, ‘constant’, ‘nearest’, ‘mirror’, ‘wrap’}, optional) -- The mode parameter determines how the input array is extended beyond its boundaries. Default is ‘reflect’.

  • cval (scalar, optional) -- Value to fill past edges of input if mode is ‘constant’. Default is 0.0.

  • truncate (float) -- Truncate the filter at this many standard deviations. Default is 4.0.

Returns

real, imaginary -- Returns real and imaginary responses, arrays of same shape as input.

Return type

arrays

Notes

The multidimensional filter is implemented by creating a gabor filter array, then using the convolve method. Also, sigma specifies the standard deviations of the Gaussian along the coordinate axes, and the Gaussian is not rotated. This is unlike skimage.filters.gabor, whose Gaussian is rotated with the complex exponential. The reasoning behind this design choice is that sigma can be more easily designed to deal with anisotropic voxels.

Examples

>>> from brainlit.preprocessing import gabor_filter
>>> a = np.arange(50, step=2).reshape((5,5))
>>> a
array([[ 0,  2,  4,  6,  8],
       [10, 12, 14, 16, 18],
       [20, 22, 24, 26, 28],
       [30, 32, 34, 36, 38],
       [40, 42, 44, 46, 48]])
>>> gabor_filter(a, sigma=1, phi=[0.0], frequency=0.1)
(array([[ 3,  5,  6,  8,  9],
        [ 9, 10, 12, 13, 14],
        [16, 18, 19, 21, 22],
        [24, 25, 27, 28, 30],
        [29, 30, 32, 34, 35]]),
 array([[ 0,  0, -1,  0,  0],
        [ 0,  0, -1,  0,  0],
        [ 0,  0, -1,  0,  0],
        [ 0,  0, -1,  0,  0],
        [ 0,  0, -1,  0,  0]]))
>>> from scipy import misc
>>> import matplotlib.pyplot as plt
>>> fig = plt.figure()
>>> plt.gray()  # show the filtered result in grayscale
>>> ax1 = fig.add_subplot(121)  # left side
>>> ax2 = fig.add_subplot(122)  # right side
>>> ascent = misc.ascent()
>>> result = gabor_filter(ascent, sigma=5, phi=[0.0], frequency=0.1)
>>> ax1.imshow(ascent)
>>> ax2.imshow(result[0])
>>> plt.show()

brainlit.preprocessing.getLargestCC(segmentation: np.ndarray)[source]

Returns the largest connected component of an image.

Parameters

segmentation (array-like) -- Segmentation data of image or volume

Returns

largeCC -- Segmentation containing only the largest connected component

Return type

array-like
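One common way to implement this, sketched here with `scipy.ndimage` (brainlit's own implementation may differ): label the foreground, count voxels per label, and keep only the most frequent nonzero label.

```python
import numpy as np
from scipy import ndimage

def largest_cc_sketch(segmentation):
    labels, n = ndimage.label(segmentation)  # label connected foreground
    if n == 0:
        raise ValueError("segmentation has no foreground")
    counts = np.bincount(labels.ravel())     # voxels per label
    counts[0] = 0                            # ignore the background label
    return labels == counts.argmax()         # mask of the largest component

seg = np.array([[1, 1, 0, 0],
                [0, 0, 0, 1],
                [0, 0, 0, 1],
                [0, 1, 0, 1]])
mask = largest_cc_sketch(seg)  # keeps the 3-voxel component in column 3
```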

brainlit.preprocessing.removeSmallCCs(segmentation: np.ndarray, size: Union[int, float])[source]

Removes small connected components from an image.

Parameters
  • segmentation (array-like) -- Segmentation data of image or volume

  • size (int or float) -- Maximum size of connected components to remove

Returns

largeCCs -- Segmentation with small connected components removed

Return type

array-like
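A sketch of the same idea with `scipy.ndimage` (a hypothetical stand-in, under one plausible reading of the size threshold): label the segmentation and zero out every component with fewer than `size` voxels.

```python
import numpy as np
from scipy import ndimage

def remove_small_ccs_sketch(segmentation, size):
    labels, _ = ndimage.label(segmentation)  # label connected foreground
    counts = np.bincount(labels.ravel())     # voxels per label
    counts[0] = 0                            # ignore the background label
    keep = np.flatnonzero(counts >= size)    # labels large enough to keep
    return np.isin(labels, keep)

seg = np.array([[1, 0, 0, 1],
                [1, 0, 0, 1],
                [0, 0, 0, 1],
                [1, 0, 0, 0]])
mask = remove_small_ccs_sketch(seg, 3)  # only the 3-voxel component survives
```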