{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Automatic and manual segmentation pipeline" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import brainlit\n", "from brainlit.utils.session import NeuroglancerSession\n", "from brainlit.utils.Neuron_trace import NeuronTrace\n", "from brainlit.algorithms.generate_fragments import adaptive_thresh\n", "import napari\n", "from napari.utils import nbscreenshot\n", "\n", "%gui qt5" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Find valid segments\n", "In this cell, we set up a NeuroglancerSession object. Since segmentation ID numbers are not in order, we print out a list of valid IDs in some range `id_range`. Most segment IDs are in `range(300)`, additionally, segments `999` and `1000` are available." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "\"\"\"\n", "# Optional: Print the IDs of segments in Neuroglancer\n", "url = \"s3://open-neurodata/brainlit/brain1\"\n", "ngl_skel = NeuroglancerSession(url+\"_segments\", mip=1,use_https=False)\n", "working_ids = []\n", "id_range = 14\n", "for seg_id in range(id_range): \n", " try:\n", " segment = ngl_skel.cv.skeleton.get(seg_id)\n", " working_ids.append(seg_id)\n", " except:\n", " pass\n", "print(working_ids)\n", "\"\"\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Download SWC information\n", "Download the information contained in a SWC file for labelled vertices of a given `seg_id` at a valid `mip` from AWS." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "\"\"\"\n", "seg_id = 13\n", "mip = 2\n", "s3_trace = NeuronTrace(url+\"_segments\", seg_id, mip)\n", "df = s3_trace.get_df()\n", "df['sample'].size # the number of vertex IDs [1, 2, ..., df['sample'].size]\n", "\"\"\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "\"\"\"\n", "print(df)\n", "\"\"\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Select vertices\n", "Select a subset of the vertices in the downloaded SWC to view and segment." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "\"\"\"\n", "subneuron_df = df[0:5] # choose vertices to use for the subneuron\n", "vertex_list = subneuron_df['sample'].array \n", "print(vertex_list)\n", "\"\"\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Download the Volume\n", "Download the volume containing the specified vertices." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "\"\"\"\n", "ngl = NeuroglancerSession(url, mip=mip)\n", "buffer = 10\n", "img, bounds, vox_in_img_list = ngl.pull_vertex_list(seg_id, vertex_list, buffer = buffer, expand = True)\n", "\"\"\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Plot" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "\"\"\"\n", "# Reference: https://github.com/NeuroDataDesign/mouselit/blob/master/bijan/mouse_test/final%20notebook.ipynb\n", "def napari_viewer(img, labels=None, shapes=None, label_name=\"Segmentation\"):\n", " viewer = napari.view_image(np.squeeze(np.array(img)))\n", " if labels is not None:\n", " viewer.add_labels(labels, name=label_name)\n", " if shapes is not None:\n", " viewer.add_shapes(data=shapes, shape_type='path', edge_color='blue', name='Skeleton')\n", " return viewer\n", "\"\"\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's take a look at the downloaded volume. Napari will open in a new window." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "\"\"\"\n", "viewer = napari.Viewer(ndisplay=3)\n", "viewer.add_image(img)\n", "nbscreenshot(viewer)\n", "\"\"\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "\"\"\"\n", "n=napari_viewer(img)\n", "\"\"\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "\"\"\"\n", "import inspect\n", "a = repr(n)\n", "print(a)\n", "\n", "b = repr(n).find(('napari.viewer.Viewer'))\n", "print(b)\n", "\"\"\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "\"\"\"\n", "n.window.close()\n", "\"\"\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "# We get a `corrected_subneuron_df` that contains `(x,y,z)` coordinates within the downloaded volume for the vertices in the SWC." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "\"\"\"\n", "import inspect\n", "a = repr(n)\n", "print(a)\n", "\n", "b = repr(n).find(('napari.viewer.Viewer'))\n", "print(b)\n", "\"\"\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "# We get a `corrected_subneuron_df` that contains `(x,y,z)` coordinates within the downloaded volume for the vertices in the SWC." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "pycharm": { "name": "#%%\n" } }, "outputs": [], "source": [ "\"\"\"\n", "import inspect\n", "a = repr(n)\n", "print(a)\n", "\n", "b = repr(n).find(('napari.viewer.Viewer'))\n", "print(b)\n", "\"\"\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# We get a `corrected_subneuron_df` that contains `(x,y,z)` coordinates within the downloaded volume for the vertices in the SWC." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "\"\"\"\n", "import inspect\n", "a = repr(n)\n", "print(a)\n", "\n", "b = repr(n).find(('napari.viewer.Viewer'))\n", "print(b)\n", "\"\"\"" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# We get a `corrected_subneuron_df` that contains `(x,y,z)` coordinates within the downloaded volume for the vertices in the SWC." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "\"\"\"\n", "transpose = vox_in_img_list.T\n", "vox_in_img_list_t = transpose.tolist()\n", "\n", "corrected_subneuron_df = s3_trace.generate_df_subset(list(vox_in_img_list_t), subneuron_start = 0, subneuron_end = 5)\n", "print(corrected_subneuron_df)\n", "\"\"\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Convert the SWC to a graph and print some information about the graph." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "\"\"\"\n", "G = s3_trace._df_to_graph(df_voxel=corrected_subneuron_df)\n", "print('Number of nodes:', len(G.nodes))\n", "print('Number of edges:', len(G.edges))\n", "print('Sample 1 coordinates (x,y,z):', G.nodes[1])\n", "paths = s3_trace._graph_to_paths(G)\n", "print('Number of paths:', len(paths))\n", "\"\"\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can display the SWC on the Volume" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "\"\"\"\n", "%gui qt\n", "napari_viewer(img, shapes=paths)\n", "nbscreenshot(viewer)\n", "\"\"\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Automatically segment the neuron\n", "We start by converting the seed points to a format used by the thresholding." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "\"\"\"\n", "seed = [adaptive_thresh.get_seed(sample)[1] for sample in vox_in_img_list]\n", "print(seed)\n", "\"\"\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Next, we compute a confidence-connected threshold segmentation." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "\"\"\"\n", "labels = adaptive_thresh.confidence_connected_threshold(img, seed, num_iter=1, multiplier=0.5)\n", "\"\"\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can display the volume, SWC, and segmentation in Napari." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "\"\"\"\n", "%gui qt\n", "viewer = napari_viewer(img, labels=labels, shapes=paths, label_name=\"Confidence-Connected Threshold\")\n", "nbscreenshot(viewer)\n", "\"\"\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Steps to Manually Edit Labels\n", "Labels can be manually edited following these steps:\n", "\n", "1. Ensure Napari is in 2D-slice viewing, not 3D view. (The second button from the bottom left)\n", "2. Click the image layer and adjust the contrast limits as desired.\n", "3. Click the \"Confidence-Connected Threshold Layer\"\n", "4. Click the paintbrush tool and adjust the brush size. Ensure that \"label\" is set to 1 to paint and 0 to erase.\n", "5. Click and drag on the image to adjust labels. Changes are saved automatically, and CMD-Z to undo is supported." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Extract the manual labels for uploading." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# manual_labels = viewer.layers['Confidence-Connected Threshold'].data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Upload the segmentation to AWS." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# %%capture\n", "# ngl_upload = NeuroglancerSession(url+\"_seg\", mip=mip)\n", "# ngl_upload.push(manual_labels, bounds);" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Confirm that the upload was successful. It was!" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# downloaded_labels = ngl_upload.pull_bounds_seg(bounds)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# print(np.all(manual_labels == downloaded_labels))" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.5" }, "pycharm": { "stem_cell": { "cell_type": "raw", "metadata": { "collapsed": false }, "source": [] } } }, "nbformat": 4, "nbformat_minor": 4 }