CNS*2010 Workshop

High-throughput 3D microscopy and high-performance computing for multi-scale modeling and simulation of large-scale neuronal circuits

July 30, 2010, San Antonio, TX

Organizers: Yoonsuck Choe, John Keyser, and Louise C. Abbott, Texas A&M University

Abstracts:


Louise C. Abbott, David Mayerich, Jaerock Kwon*, and Yoonsuck Choe (Texas A&M University, *Kettering University)

High-throughput imaging of whole mouse brain using the Knife-Edge Scanning Microscope

The primary goal of the Brain Networks Laboratory at Texas A&M University is to map the connectivity of neuronal and microvascular networks in the mammalian brain, so that we can use this information to develop accurate models of how the normal brain functions and how disease processes alter or destroy normal brain function. Dr. Bruce McCormick and his colleagues in the Brain Networks Laboratory developed a unique instrument, the Knife-Edge Scanning Microscope (KESM), that brings high-resolution imaging techniques to bear on the information necessary to develop accurate computational models of neuronal interconnections and brain vascularization. The KESM is used to sample large regions of tissue, including the entire mouse brain, at a resolution of less than one micron per sample in all three dimensions. We have developed or modified several staining methods to visualize neurons in the whole mouse brain, including thionin and Golgi-Cox stains, and we have used perfusion with India ink to visualize vasculature down to the level of capillaries in the central nervous system. The processed mouse brains are embedded in Araldite before sectioning, and all sampled tissues greatly exceed what can be imaged in a reasonable time using other, standard techniques. We are able to use the acquired information to construct large-scale, three-dimensional data sets. This technology allows us to use high-resolution observations of the neural and vascular structure of the mouse brain to develop and test computational theories that reflect how neuronal networks function on a broad scale.


Kenneth J. Hayworth, Narayanan Kasthuri, Richard Schalek, Juan C. Tapia, Jeff Lichtman (Harvard University)

Large Volume Neural Circuit Reconstruction Using the Tape to SEM Process

Brain function is defined at the level of the synaptic connectivity between neurons. For example, the receptive field properties distinguishing one visual cortex cell from another are defined precisely by which neurons synapse on that cell. Obtaining accurate maps of neuron-to-neuron connectivity within and between brain regions is crucial for understanding brain function. Mapping connectivity is difficult because individual axonal and dendritic processes can course many millimeters across the brain and can shrink to less than 50 nm in diameter. In order to efficiently trace neuronal circuits over volumes spanning millimeters, we developed an automated tape-collection mechanism that can be attached to a commercial ultramicrotome. With this device we can collect thousands of ultrathin tissue sections (~30 nm thick) on one long continuous tape of plastic film, and can do so in a matter of hours with no human intervention required during the process. The resulting "tissue tape" is then stained with heavy metals and placed in a scanning electron microscope (SEM) for imaging at 5 nm resolution – sufficient to trace synaptic connectivity. A key advantage of this process is that any region of the tape can be randomly accessed for imaging at any desired resolution, allowing for time-efficient tracing of neural circuits spanning cubic-millimeter volumes and larger.


Daniel Berger and H. Sebastian Seung (MIT)

Semi-automatic SEM imaging and analysis of neuronal connectivity using ATLUM/ATUM slice stacks

With the ATLUM (Automatic Tape-collecting Lathe Ultra-Microtome) or ATUM (Automatic Tape-collecting Ultra-Microtome) technique developed in the lab of Jeff Lichtman at Harvard University, ultrathin slices (< 30 nm thick) can be cut from a block of brain tissue embedded for electron microscopy and automatically collected on a tape. With this method stacks of thousands of slices can be acquired without loss. Strips of tape are then mounted on a silicon wafer for imaging in a scanning electron microscope (SEM).

To create a three-dimensional SEM image volume of a region of the tissue, the same area of the tissue sample has to be imaged in many slices. We use a novel approach to automate this process. An optical image of the wafer holding the tape strips is taken with a digital camera under lighting conditions that make the slices visible. A stage map file for the SEM is then generated from the slice locations in the image. The microscope then automatically takes a large-field-of-view overview image of each slice. These images are aligned in the computer, and the result of the alignment is used to refine the stage map. A second round of imaging and alignment at higher magnification can be done to improve the stage positions further. The accuracy of the resulting stage map is limited by the precision of mechanical stage movement, but can reach a few micrometers in our case.
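The core of such a stage map is a calibration from camera-image coordinates to stage coordinates. As an illustrative sketch only (not the authors' code, and all names here are hypothetical), an affine transform can be fit by least squares from a few slice positions matched between the optical image and measured stage locations:

```python
# Sketch: fit an affine map from optical-image pixel coordinates of slice
# locations to SEM stage coordinates, via least squares over matched points.
# Hypothetical illustration, not the authors' implementation.
import numpy as np

def fit_affine(pixel_xy, stage_xy):
    """Fit stage ~= P @ M, where P is pixel coords in homogeneous form."""
    P = np.hstack([pixel_xy, np.ones((len(pixel_xy), 1))])  # (n, 3)
    M, *_ = np.linalg.lstsq(P, stage_xy, rcond=None)        # M is (3, 2)
    return M

def to_stage(M, pixel_xy):
    """Map pixel coordinates to stage coordinates with a fitted transform."""
    P = np.hstack([pixel_xy, np.ones((len(pixel_xy), 1))])
    return P @ M

# Synthetic example: slice centers found in the camera image vs. measured
# stage positions (ground truth here: 0.05 mm/px scale plus an offset).
pix = np.array([[100., 100.], [900., 120.], [120., 880.], [870., 860.]])
stage = pix * 0.05 + np.array([10.0, -3.0])
M = fit_affine(pix, stage)
pred = to_stage(M, np.array([[500., 500.]]))  # -> approx. [[35., 22.]]
```

In practice the second, higher-magnification alignment round described above would refine these predicted positions slice by slice.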

Imaging all slices at maximal resolution is infeasible with current SEMs, since imaging is several orders of magnitude slower than cutting. However, since the ATLUM technique is non-destructive, we can take overview images at low magnification and then decide on specific regions of interest, which we then image at very high resolution. Using this method, large dendritic processes and myelinated axons can be traced over large distances in the low-magnification overview images, while the actual synaptic connectivity of these processes can be investigated locally using the high-magnification images. The slices cut by the ATLUM can be several mm^2 in area, which allows us to investigate larger neuronal systems, such as the cortical column, or entire small brains. At high magnification, image resolution is good enough that most, if not all, axons, thin dendritic spines, and synapses can be resolved. In this talk I will give an overview of the imaging and image alignment methods used for ATLUM slice stacks, and will present recent results of local structure and connectivity analysis from a large stack of mouse cortex.


Brad Busse, Kristina Micheva, and Stephen J. Smith (Stanford University)

Large scale synaptic analysis with Array Tomography

The synapse is the smallest discrete computational unit in the brain. Through proteomic diversity that is not always observable from ultrastructure alone, synapses undergo plasticity and operate over a wide functional spectrum. A lack of methods for characterizing the composition of individual synapses in situ has so far hindered our ability to explore synaptic molecular diversity. My talk will focus on the use of array tomography, a new high-resolution proteomic imaging method, to explore the molecular composition of glutamate and GABA synapses in the somatosensory cortex of transgenic YFP-H mice, with the aid of supervised machine learning for large-scale synaptic classification.


Pablo Blinder^1, Philbert S. Tsai^1, John Kaufhold^2, and David Kleinfeld^1 (^1University of California, San Diego and ^2SAIC)

Reconstruction of the cortical vascular network in mouse

We use an all-optical-histology (AOH) technology developed in our lab to process large volumes of histological samples suited for fluorescent image acquisition. AOH combines two-photon laser scanning microscopy, to acquire volumetric data, with plasma-mediated laser ablation, to remove a portion of the imaged sample. Our focus centers on the organization of cortical vasculature around well-defined neuronal units. For this purpose, our data sets consist of three- to five-cubic-millimeter volumes, at a resolution of 1 μm^3/voxel, that span ten or more cortical columns, i.e., barrels, across the mouse primary somatosensory cortex. We form a vectorized representation of the vasculature and the locations of all cell nuclei, and label each nucleus as neuron or non-neuron.

In this talk, I will present an overview of the AOH technique as well as a description of the algorithms developed to register, segment, and vectorize the thousands of blocks that comprise an AOH dataset. As we move toward detailed topological analysis of the vascular network as well as simulation of blood flow, we pay special attention to spurious gaps and spurious mergers in the vascular graph that arise from upstream causes, including sample preparation, imaging, segmentation, and vectorization, alone or in combination. I will present our novel approach to gap-filling based on local threshold relaxation of the segmented volume around each gap. I will also describe related work on deleting spurious connections in the vascular graph. For both gap-filling and strand deletion, we use validated machine learning methods to assess the potential improvement in detection performance; these methods provide confidence levels for both reconnection and removal of spurious connections.
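The gap-filling idea can be sketched in miniature. The following is our reading of the abstract, not the authors' implementation: around a suspected gap, the segmentation threshold is lowered inside a small local window, and if the two vessel fragments then join into one connected component, the gap is bridged. A 2-D toy example in pure NumPy:

```python
# Toy sketch of gap-filling by local threshold relaxation (illustrative only).
import numpy as np
from collections import deque

def label_components(mask):
    """4-connected component labeling via BFS; returns (labels, count)."""
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue
        count += 1
        labels[start] = count
        q = deque([start])
        while q:
            y, x = q.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = count
                    q.append((ny, nx))
    return labels, count

def relax_gap(intensity, thresh, window, relaxed_thresh):
    """Re-threshold only inside `window` (a slice tuple) at a lower value."""
    mask = intensity >= thresh
    mask[window] = intensity[window] >= relaxed_thresh
    return mask

# Toy vessel: a bright strand with a dim stretch that a global threshold cuts.
img = np.zeros((5, 11))
img[2, :] = 1.0
img[2, 4:7] = 0.4                      # signal dropout along the strand

_, n_before = label_components(img >= 0.5)                      # two fragments
mask = relax_gap(img, 0.5, (slice(1, 4), slice(3, 8)), 0.3)
_, n_after = label_components(mask)                             # one strand
```

The real pipeline operates on 3-D volumes and, as the abstract notes, pairs each candidate reconnection with a machine-learning-derived confidence level rather than accepting it unconditionally.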


Chris Bjornsson and Badri Roysam (RPI)

Mapping the Glio-vascular Infrastructure of Brain Tissue

Nervous tissue is arguably the most structurally complex tissue in the body, presenting a supreme challenge to efforts directed at imaging diverse, intricate cell types and extracting quantitative information from massive, multidimensional datasets. Initial studies of the biological reactive responses to inserted silicon neural prosthetic devices have been greatly enabled by advances in the FARSIGHT software platform, and leveraging its flexibility has led to more broadly based investigations into the interaction of glia and neurovasculature in health and disease. While many imaging efforts focus predominantly on neuronal architecture and connectivity, the central role of glia and vasculature in a number of disease states, in drug delivery strategies, and in normal homeostasis makes them ideal subjects for study. Building on multilabel immunohistochemistry of thick tissue slices, spectral confocal imaging, automated montaging of 3D datasets representing vast expanses of brain tissue, and fully automated segmentation and classification of tortuous blood vessels and tens of thousands of cells within a single dataset, we are currently developing sophisticated labeling and unmixing strategies, cell-process tracing algorithms, and the only validation interface capable of thoroughly interrogating datasets of this magnitude. As these developments are implemented, we are prepared to map the glio-vascular architecture of the brain in its entirety.


David Mayerich^1, Yoonsuck Choe^2, and John Keyser^2 (^1University of Illinois, Urbana-Champaign and ^2Texas A&M University)

Segmentation and Visualization of High-Throughput Microscopy Datasets

High-throughput microscopy data sets often contain densely packed structures, making volumes cluttered and difficult to interpret. In addition, biological structures such as neurons and microvessels form complex networks composed of thin filaments that are highly intertwined and span large volumes of tissue. Understanding their structure and connectivity therefore requires working with large data sets, further complicating an already cluttered visualization. In this talk, we discuss methods used to segment network structures in volumetric data and store the results efficiently on the GPU, allowing users to interactively explore neuronal and microvascular networks. We first segment these networks using tracking methods to find an underlying skeleton. We then store the volumetric data describing network components using a GPU-based data structure that provides significant compression while allowing interactive rendering and dynamic selection of rendered components. The source code and software, as well as sample high-throughput data sets for experimentation, will be made available online.
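The compression wins come from the fact that filamentous networks occupy a tiny fraction of the volume. As a conceptual CPU-side sketch (ours, not the released software), a brick map stores only bricks that contain segmented structure, mirroring how a GPU page table would reference occupied blocks:

```python
# Conceptual sketch of brick-based sparse volume storage (illustrative only).
import numpy as np

BRICK = 8  # brick edge length in voxels; assumed to divide the volume evenly

def pack(volume):
    """Return {brick_index: brick_array} for bricks with any nonzero voxel."""
    bricks = {}
    nz, ny, nx = (s // BRICK for s in volume.shape)
    for k in range(nz):
        for j in range(ny):
            for i in range(nx):
                b = volume[k*BRICK:(k+1)*BRICK,
                           j*BRICK:(j+1)*BRICK,
                           i*BRICK:(i+1)*BRICK]
                if b.any():
                    bricks[(k, j, i)] = b.copy()
    return bricks

def voxel(bricks, z, y, x):
    """Look up a voxel; empty bricks are implicit zeros and cost no memory."""
    b = bricks.get((z // BRICK, y // BRICK, x // BRICK))
    return 0 if b is None else b[z % BRICK, y % BRICK, x % BRICK]

vol = np.zeros((32, 32, 32), dtype=np.uint8)
vol[5, 5, 5] = 7            # a lone filament voxel
bricks = pack(vol)          # only 1 of 64 bricks is actually stored
```

On the GPU, the dict would typically become an index texture addressing a pool of resident bricks, which is also what makes dynamic selection of rendered components cheap.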


Randal Koene (Fatronic Tecnalia Foundation, San Sebastian, Spain)

Using new in-vivo techniques to add function to reconstructions from high-throughput microscopy

Successful high-throughput microscopy can lead to high-volume, high-resolution image data stacks. Current challenges have involved the reconstruction of that image data into recognizable three-dimensional structures, such as vasculature or axons and dendrites. Once this is accomplished, the reconstructed anatomy must be reliably transitioned into a 3D model of mathematically abstracted components with defined biophysical functions. The transition to such a model relies on correlations between geometric anatomy and functional dynamics. It may be possible to create a library of identified characteristic physiological components with reliable correlations to anatomy. Even with a very good library of this kind, there is a probability of functional drift through cumulative error within a model that combines many of these abstracted elements. We would like to investigate the effects of these differences between model and biology, with the potential to learn how to compensate for them in a satisfactory manner. For such comparisons, techniques for large-scale in-vivo recording will be very valuable, particularly any that can achieve both large scale and high resolution. Interesting cutting-edge techniques involve recording in slices or whole brains with multi-electrode arrays containing up to thousands of electrodes. Beyond this, we may speculate about the promise of developments in synthetic biology and their application to neuroscientific investigation. Concepts of DNA writing, as pioneered in work led by J. Craig Venter, suggest how we may harness the machinery of our physiology, which already reads the relevant signals within our brains, to generate new signals that can be extracted more readily at large scale and high resolution.


Andrew Duchowski (Clemson University)

Eye-tracking technology and its potential application to tracing and validation of microscopy data

The talk explores the potential use of aggregate human eye movement analysis to augment or drive visualization of brain tissue data imaged via 3D microscopy. We seek a selective, gaze-directed volumetric rendering strategy derived from eye movements collected over video clips of dynamic media (digital movie clips) recorded during volumetric inspection of microscopy data, e.g., vasculature. The eye movements of experts (e.g., neuroscientists) are expected to indicate informationally salient topographic regions at which graphics resources can be targeted to streamline rendering, as well as to select and highlight informative structures for volumetric reconstruction. We hypothesize that for this top-down cognitive task (visual selection of important regions), recorded gaze fixations, or scanpaths, will differ significantly from those predicted by leading bottom-up automatic saliency map models. Techniques suitable for quantitative testing of this hypothesis are discussed, including aggregate eye movement analysis and comparison. In particular, a group-wise scanpath similarity metric is proposed.
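To make the comparison concrete, here is a toy stand-in for a scanpath similarity measure. The abstract proposes its own group-wise metric, which is not specified here; this sketch simply uses the symmetric mean nearest-fixation distance between two fixation sequences, with hypothetical data:

```python
# Toy scanpath comparison (illustrative only; not the proposed metric).
import math

def nearest_mean(a, b):
    """Mean distance from each fixation in a to its nearest fixation in b."""
    return sum(min(math.dist(p, q) for q in b) for p in a) / len(a)

def scanpath_distance(a, b):
    """Symmetrized measure; lower means more similar, 0 for identical sets."""
    return 0.5 * (nearest_mean(a, b) + nearest_mean(b, a))

# Hypothetical fixation coordinates (screen pixels) from two viewers.
expert = [(10, 10), (40, 42), (80, 79)]
novice = [(12, 9), (70, 20), (82, 80)]

same = scanpath_distance(expert, expert)   # 0.0 for identical scanpaths
diff = scanpath_distance(expert, novice)   # > 0 for differing scanpaths
```

Comparing such distances within the expert group against distances to model-predicted fixations would be one simple way to test the stated hypothesis; metrics that also respect fixation order and duration are more faithful but more involved.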


Todd Huffman (3Scan) and Peter Eckersley (Electronic Frontier Foundation)

Large-scale, collaborative scanning, and the role of commercialization

High-throughput 3D microscopy will lead to large-scale commercial applications, which offer the potential to make large data sets available for modeling. Licensing is often a barrier to public and private collaboration, and new open licensing models are under development to facilitate the creation of a data 'commons'. In this talk I will briefly discuss the issues and advances being made toward community-friendly licensing, and the implications for scientists engaged in high-throughput imaging and modeling.


Yoonsuck Choe (Texas A&M University) and Jaerock Kwon (Kettering University)

Open issues in high-fidelity simulation of the connectome

In this brief talk, I will discuss open issues regarding high-fidelity simulation of the connectome (in general). First, I will talk about the four stages in connectomics research: (1) data acquisition, (2) tracing and reconstruction, (3) simulation of computational models, and (4) analysis of simulation results. Next, I will discuss issues arising in each of these steps, and the importance of a good theoretical framework to guide the overall project.


Sponsors:

  • 3Scan
  • Organization for Computational Neuroscience (OCNS)

