Workshop Invited Talks
Workshop on Big Data and Visualization for Brainsmatics (BDVB 2017)
Time: 13:30 – 17:30, Nov 16, 2017
A Scalable Image Processing Pipeline for Reproducible Terascale Brain Atlasing
With novel imaging technologies producing new modalities at ever higher resolution, neuroscientists can extract far richer information from the brain. However, such big data poses challenges for efficient processing, integration, analysis, and visualization. We present a platform that provides a high-performance data processing pipeline to integrate data into a common brain atlas space, following a semantics- and provenance-based scheme. This allows not only semantic and spatial querying of the data but also automated provenance tracking, ensuring the reproducibility of the data processing workflow.
A data processing pipeline has been established to process whole-brain mouse image stacks acquired by the fMOST imaging system, in which neurons are sparsely stained and can be reconstructed into morphologies. The pipeline consists of four major components:
1) Data preprocessing, which includes intensity normalization, shading artifact correction, and isotropic resampling to bring the images into a standardized format for integration. To cope with the large data size and boost computation performance, these modules are built on distributed computing frameworks (MPI, Spark): the data is split into small blocks and processed in parallel on a high-performance computing (HPC) cluster of 35 nodes.
2) Data anchoring to a brain atlas. Spatial anchoring, or alignment, allows data from different subjects or modalities to be positioned in a single reference system. This is achieved via landmark-based image registration using a thin-plate-spline transformation.
3) Data transformation. Both the image stack and the reconstructed morphology are transformed into alignment with the atlas using the transformation obtained in the registration step.
4) Data visualization. The image stack is converted into the Blue Brain Image Container (BBIC) format, a hierarchical representation of volumetric data encapsulated in an HDF5 file that allows visualization in a web application through a dedicated RESTful image service.
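The block-wise parallel scheme of the preprocessing step can be sketched in miniature. This is an illustrative stdlib-only sketch, not the pipeline's actual code: a thread pool stands in for the MPI/Spark cluster, and the function names (`normalize_block`, `split_into_blocks`, `preprocess`) are assumptions for the sake of the example.

```python
from concurrent.futures import ThreadPoolExecutor

def normalize_block(block):
    """Min-max intensity normalization of a single image block."""
    lo, hi = min(block), max(block)
    scale = (hi - lo) or 1  # guard against constant-intensity blocks
    return [(v - lo) / scale for v in block]

def split_into_blocks(voxels, block_size):
    """Partition a flat list of voxel intensities into fixed-size blocks
    so they can be distributed across workers."""
    return [voxels[i:i + block_size] for i in range(0, len(voxels), block_size)]

def preprocess(voxels, block_size=4, workers=4):
    """Normalize every block in parallel, mimicking the distributed scheme."""
    blocks = split_into_blocks(voxels, block_size)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(normalize_block, blocks))
```

The same split-process-gather pattern scales up when the blocks are dispatched to cluster nodes instead of threads; each block is processed independently, so no inter-worker communication is needed during normalization.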
To facilitate the automation of the processing pipeline in a cluster-based computing environment, the components are managed by a dedicated web service. Jobs can be submitted asynchronously to the web service from a versatile client interface (e.g., a Jupyter notebook or a web application) through a RESTful API. Each submission automatically registers the data and processing steps involved into a knowledge graph platform, Blue Brain Nexus.
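A client-side job submission might look roughly as follows. This is a hedged sketch only: the endpoint URL, the `/jobs` route, and the payload fields are hypothetical, since the abstract does not specify the service's actual API.

```python
import json
import urllib.request

SERVICE_URL = "http://atlas-pipeline.example.org/api"  # hypothetical endpoint

def build_job_request(step, inputs, params):
    """Assemble a pipeline job description; on submission the web service
    registers the involved data and processing step in the knowledge graph."""
    return {
        "step": step,          # e.g. "preprocessing", "anchoring", "transformation"
        "inputs": inputs,      # identifiers of the datasets to process
        "parameters": params,
        "async": True,         # jobs are submitted asynchronously
    }

def submit_job(job):
    """POST the job to the (hypothetical) RESTful API and return a job id."""
    req = urllib.request.Request(
        SERVICE_URL + "/jobs",
        data=json.dumps(job).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["job_id"]
```

Because submission is asynchronous, a Jupyter notebook or web client can queue many jobs and poll for results later rather than blocking on each processing step.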
Blue Brain Nexus is a domain-independent, provenance-based, semantic data management platform. Through a knowledge graph, it enables the description of a domain of application in which entities can be created and managed, their provenance stored and managed, and their relationships captured. A provenance template has been designed to standardize the representation of all data entities and processing activities for brain atlasing. Schemas, developed using the Shapes Constraint Language (SHACL) to ensure the validity and quality of the data descriptions and provenance, capture both the semantic information of the data and its spatial coordinates in the common atlas space.
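The shape of such a provenance record can be illustrated with a small sketch. The property names follow the W3C PROV vocabulary (`prov:wasGeneratedBy`, `prov:used`, `prov:wasAttributedTo`); the `atlas:coordinates` field and all identifiers are illustrative assumptions, not Nexus's actual schema.

```python
def provenance_record(entity_id, activity, inputs, agent, coords=None):
    """Build a PROV-style record linking a derived data entity to the
    activity that generated it and the source entities it used."""
    record = {
        "@id": entity_id,
        "prov:wasGeneratedBy": activity,
        "prov:used": list(inputs),
        "prov:wasAttributedTo": agent,
    }
    if coords is not None:
        # spatial position in the common atlas space, stored alongside
        # the semantic description so both can be queried together
        record["atlas:coordinates"] = coords
    return record
```

In a real deployment a SHACL schema would validate each such record, e.g. requiring that every derived entity names the activity that generated it, which is what makes the workflow reproducible end to end.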
Future work will focus on improving the automation of the processing workflow and designing strategies for federated data ingestion.
brain atlasing, high-performance computing, knowledge graph
Sean Hill is co-Director of the Blue Brain Project, a Swiss national brain initiative, where he leads the Neuroinformatics division, based at the Campus Biotech in Geneva, Switzerland. He also directs the Laboratory for the Neural Basis of Brain States at the École Polytechnique Fédérale de Lausanne (EPFL).
Dr. Hill has extensive experience in building and simulating large-scale models of brain circuitry and has also supervised and led research efforts exploring the principles underlying the structure and dynamics of neocortical and thalamocortical microcircuitry. He currently serves in management and advisory roles on several large-scale clinical informatics initiatives around the world.
After completing his Ph.D. in computational neuroscience at the Université de Lausanne, Switzerland, Dr. Hill held postdoctoral positions at The Neurosciences Institute in La Jolla, California and the University of Wisconsin, Madison, then joined the IBM T.J. Watson Research Center where he served as the Project Manager for Computational Neuroscience at Blue Brain until his appointment at the EPFL.
Dr. Hill served as the Executive Director (2011-2013) and Scientific Director (2014-2016) of the International Neuroinformatics Coordinating Facility (INCF) at the Karolinska Institutet in Stockholm, Sweden.
Workshop on Brain Big Data Based Wisdom Service (BBDBWS 2017)
Time: 13:30 – 17:30, Nov 18, 2017
Is deep learning brain-like? Using fMRI to reconstruct the image
There is ongoing debate about whether deep learning is brain-like. We first use advanced computational methods to establish a mapping model between the visual system and external visual stimuli, and explore the mechanisms of visual information processing in the brain. We then study the relationship between deep learning and the brain's visual information processing to understand how each network layer is expressed in the cerebral cortex. Finally, we develop a deep Bayesian generative model to reconstruct images from fMRI signals.
Workshop on Brain and Artificial Intelligence (BAI 2017)
Time: 13:30 – 17:30, Nov 16, 2017
Brain-inspired intelligence: From brain-inspired learning to brain-inspired conscious machine
Constructing a mechanical brain inspired by multi-scale brain structures and operating principles is considered a promising approach to the ultimate goal of real machine intelligence. In this talk, I will start with the questions "Can machines think?" and "How can the human mind occur in the physical universe?". Building on our ongoing work on the Brain-inspired Cognitive Engine (BrainCog), I will introduce multi-scale brain-inspired neural networks, autonomous brain-inspired learning and decision making, and their applications in unmanned aerial vehicles and robotics conducted in my group. I will then present our first attempt at brain-inspired robot self-consciousness and a roadmap for a Brain-inspired Conscious Machine.
Yi Zeng is a Professor at the Institute of Automation, Chinese Academy of Sciences. He is Deputy Director of the Research Center for Brain-inspired Intelligence, and a Principal Investigator at the Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences. He is also a Professor at the School of Future Technology, University of Chinese Academy of Sciences. His research focuses on brain-inspired intelligence, mainly cognitive brain simulation, brain-inspired neural networks, computational models of consciousness, and brain-inspired living robotics.