Plenary Speakers
All plenary talks will be in the La Perouse room.
Alex Graves
Research Scientist, Google DeepMind
Bio: Research Scientist at Google DeepMind. Canadian Institute for Advanced Research (CIFAR) Junior Fellow at the University of Toronto.
Title: Frontiers in recurrent neural network research
Abstract:
In the last few years, recurrent neural networks (RNNs) have become the Swiss Army knife of large-scale sequence processing. Problems involving long and complex data streams, such as speech recognition, machine translation and reinforcement learning from raw video, are now routinely tackled with RNNs. This talk takes a look at some of the new architectures, applications and training strategies currently being developed in this exciting field.
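For readers unfamiliar with the architecture behind this talk, the sketch below shows the core recurrence of a vanilla RNN: a hidden state that is updated at every step of a sequence. This is a generic didactic toy, not code from the talk; all names, sizes, and the random initialization are assumptions chosen for illustration, and training (e.g., backpropagation through time) is omitted.

```python
# A minimal vanilla RNN forward pass, sketched with NumPy.
# Illustrative only; parameter names and sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size = 8, 16

# Randomly initialized parameters (no training here).
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
b_h = np.zeros(hidden_size)

def rnn_forward(xs):
    """Run the cell over a sequence xs of shape (T, input_size)."""
    h = np.zeros(hidden_size)
    states = []
    for x in xs:
        # The defining recurrence: new state depends on input AND old state.
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
        states.append(h)
    return np.stack(states)

seq = rng.normal(size=(5, input_size))  # a toy length-5 sequence
print(rnn_forward(seq).shape)           # -> (5, 16)
```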
Stephen Grossberg
Wang Professor of Cognitive and Neural Systems, Boston University
Bio: Wang Professor of Cognitive and Neural Systems, Boston University. Founding President, International Neural Network Society. Founding Editor-in-Chief, Neural Networks. Recipient of the INNS Helmholtz Award (2003) and the IEEE Frank Rosenblatt Award (2017). INNS Fellow.
Title: Towards Solving the Hard Problem of Consciousness: The Varieties of Brain Resonances and the Conscious Experiences that they Support
Abstract:
What happens in our brains when we consciously experience sights, sounds, feelings, and knowledge about them? The Hard Problem of Consciousness is the problem of explaining how this happens. To solve it, a theory of consciousness needs to link brain to mind by modeling how brain dynamics give rise to conscious experiences, and specifically how the emergent properties of those dynamics generate the properties of individual experiences and of the psychological and neurobiological data associated with them. This talk summarizes evidence that Adaptive Resonance Theory, or ART, is accomplishing this goal. ART is a cognitive and neural theory of how advanced brains autonomously learn to attend, recognize, and predict objects and events in a changing world. ART has predicted that “all conscious states are resonant states” as part of its specification of mechanistic links between processes of consciousness, learning, expectation, attention, resonance, and synchrony. It thereby provides functional and mechanistic explanations of data ranging from individual spikes and their synchronization to the dynamics of conscious perceptual, cognitive, and cognitive-emotional behaviors. ART has now reached sufficient maturity to begin classifying the brain resonances that support conscious experiences of seeing, hearing, feeling, and knowing. The talk will review several of these resonances and their similarities and differences, including where they occur in our brains; how they interact when we feel and know about what we see and hear; and the normal and clinical psychological and neurobiological data that they explain and predict, and that alternative theories have not explained. The talk will also mention some resonances that do not become conscious, and explain why, including why not all brain dynamics are resonant, in terms of the computationally complementary organization of cortical processing streams.
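The match-versus-reset mechanism that the abstract alludes to can be illustrated with a toy ART1-style matcher: a bottom-up binary input is compared against a learned top-down expectation, and learning occurs only when the match exceeds a vigilance threshold (resonance); otherwise the category is reset and the search continues. This is a bare-bones didactic sketch of that one mechanism, not Grossberg's full model; the class, parameter names, and vigilance value are assumptions.

```python
# Toy ART1-style categorization: resonance vs. mismatch reset.
# A didactic sketch of the mechanism, not the full theory.
import numpy as np

class ToyART1:
    def __init__(self, vigilance=0.6):
        self.rho = vigilance
        self.categories = []  # learned binary prototypes (expectations)

    def present(self, x):
        x = np.asarray(x, dtype=bool)
        # Search existing categories, best overlap first.
        order = sorted(range(len(self.categories)),
                       key=lambda j: -np.sum(self.categories[j] & x))
        for j in order:
            w = self.categories[j]
            match = np.sum(w & x) / max(np.sum(x), 1)
            if match >= self.rho:            # resonance: attend and learn
                self.categories[j] = w & x   # refine the expectation
                return j
            # else: mismatch reset, try the next category
        self.categories.append(x.copy())     # no resonance: new category
        return len(self.categories) - 1

art = ToyART1(vigilance=0.6)
print(art.present([1, 1, 0, 0]))  # 0: first input founds category 0
print(art.present([1, 1, 0, 1]))  # 0: match 2/3 >= 0.6, resonates
print(art.present([0, 0, 1, 1]))  # 1: mismatch reset, new category
```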
Odest Chadwicke Jenkins
Associate Professor of Computer Science and Engineering, University of Michigan
Bio: Associate Professor of Computer Science and Engineering, University of Michigan. Sloan Research Fellow. Recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE) and of young investigator awards from the Office of Naval Research and the Air Force Office of Scientific Research.
Title: Perception of People and Scenes for Robot Learning from Demonstration
Abstract:
We are at the dawn of a robotics revolution, when visions of interconnected heterogeneous robots in widespread use will become a reality. Similar to the "app stores" of modern computing, people at varying levels of technical background will contribute to "robot app stores" as designers and developers. However, current paradigms for programming robots beyond simple cases remain inaccessible to all but the most sophisticated developers and researchers.
In order for people to fluently program autonomous robots, a robot must be able to interpret commands that accord with a human’s model of the world. The challenge is that many aspects of such a model are difficult or impossible for the robot to sense directly. We posit that the critical missing component is the grounding of symbols that conceptually tie together low-level perception with user programs and high-level reasoning systems. Such a grounding will enable robots to perform tasks that require extended goal-directed autonomy, as well as to work fluidly with human partners.
Towards making robot programming more accessible and general, I will present our work on improving perception of people and scenes to enable robot learning from human demonstration. Robot learning from demonstration (LfD) has emerged as a compelling alternative to explicit coding in a programming language: robots are instead programmed implicitly from a user’s demonstration. Phrasing LfD as a statistical regression problem, I will present our multivalued regression algorithms for learning robot controllers in the face of perceptual aliasing. I will also describe how such regressors can be used within physics-based estimation systems to learn controllers for humanoids from monocular video of human motion. With respect to learning for sequential manipulation tasks, our recent work aims to perceive axiomatic descriptions of scenes from depth data for planning goal-directed behavior.
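The "LfD as regression" framing can be made concrete in a few lines: fit a policy a = f(s) from demonstrated (state, action) pairs, then use the fitted function as a controller. The sketch below uses plain least squares on synthetic data; it is a minimal illustration of the framing only, not the talk's multivalued regression algorithms (which handle perceptual aliasing, i.e., one perceived state admitting several valid actions). All data shapes and names are assumptions.

```python
# Learning from demonstration phrased as regression, in miniature.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical demonstration data: 4-D states S and the 2-D
# actions A a human teacher chose in those states.
S = rng.normal(size=(200, 4))
true_W = rng.normal(size=(4, 2))
A = S @ true_W + 0.01 * rng.normal(size=(200, 2))

# Fit the policy by least squares: W = argmin ||S W - A||^2.
W, *_ = np.linalg.lstsq(S, A, rcond=None)

def policy(state):
    """The learned controller: map a perceived state to an action."""
    return state @ W

s_new = rng.normal(size=4)
print(policy(s_new))  # action the learned controller outputs in s_new
```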
Christof Koch
President and Chief Scientific Officer, Allen Institute for Brain Science
Bio: Professor of Biology and Engineering at the California Institute of Technology in Pasadena. Chief Scientific Officer of the Allen Institute for Brain Science in Seattle. INNS Fellow.
Title: Big Science, Team Science, Open Science for Neuroscience
Abstract:
Over the past decade, the Allen Institute for Brain Science has produced a series of brain atlases. These are large public resources (3 TB, more than a million slides) that integrate genome-wide gene expression and neuroanatomical data across the entire brain for developing and adult humans, non-human primates, and mice, complemented by high-resolution, cellular-level anatomical connectivity data in several thousand mice. Together they form the largest integrated neuroscience database worldwide. Anybody can freely access this data, without any restrictions, at www.brain-map.org.
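As a pointer for readers who want to try this, the Allen Institute exposes its atlas data through a RESTful query interface at api.brain-map.org. The sketch below shows one plausible query following the site's documented RMA pattern; treat the exact criteria string and response fields as assumptions and check them against the current documentation at www.brain-map.org before relying on them.

```python
# Hedged sketch: querying the Allen Brain Atlas web API for a gene.
# The endpoint and RMA criteria syntax below are assumptions based
# on the publicly documented interface; verify before use.
import requests

URL = "http://api.brain-map.org/api/v2/data/query.json"
params = {
    "criteria": "model::Gene,rma::criteria,[acronym$eq'Pvalb']",
    "num_rows": 5,
}
resp = requests.get(URL, params=params, timeout=30)
resp.raise_for_status()
for gene in resp.json().get("msg", []):
    print(gene.get("acronym"), "-", gene.get("name"))
```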
Six years ago, we embarked on an ambitious 10-year initiative to understand the structure and function of the neocortex and associated satellite structures in humans and mice. We are setting up high-throughput pipelines to exhaustively characterize the morphology, electrophysiology, and transcriptome of cell types, as well as their synaptic interconnections, in the laboratory mouse and in human neocortex (via a combination of fetal, neurosurgical, and post-mortem tissues). We are building brain observatories to image the activities of tens of thousands of neurons throughout the cortico-thalamic system in behaving mice, to record their electrical activities, and to analyze their connectivity at the ultra-structural level. We are also constructing biophysically detailed as well as simplified computer simulations of these networks and of their information-processing capabilities, focusing on how neocortical tissue gives rise to perception, behavior, and consciousness.
Jose C. Principe
Distinguished Professor, University of Florida
Bio: Distinguished Professor of Electrical and Biomedical Engineering at the University of Florida. Recipient of INNS Gabor Award (2006). INNS Fellow.
Title: A Cognitive Architecture for Object Recognition in Video
Abstract:
This talk describes our efforts to abstract from the animal visual system the computational principles needed to explain images in video. We develop a hierarchical, distributed architecture of dynamical systems that self-organizes to explain the input imagery, using an empirical Bayes criterion with sparseness constraints and dual state estimation. The interpretation of the images is mediated through causes that flow top-down and change the priors for bottom-up processing. We will present preliminary results on several data sets.
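One ingredient of such architectures can be shown in miniature: inferring sparse hidden "causes" z that explain an input x through a dictionary D, by minimizing ½||x − Dz||² + λ||z||₁ with iterative soft-thresholding (ISTA). This is a generic sparse-coding sketch under a sparseness prior, not the talk's full hierarchical, dynamical model; the dictionary, step size, and λ are assumptions.

```python
# Generic sparse-coding inference by ISTA: find sparse causes z
# that explain the input x through dictionary D. Illustrative only.
import numpy as np

rng = np.random.default_rng(2)
n_inputs, n_causes = 16, 32
D = rng.normal(size=(n_inputs, n_causes))
D /= np.linalg.norm(D, axis=0)                # unit-norm dictionary atoms

def infer_causes(x, lam=0.1, steps=200):
    z = np.zeros(n_causes)
    step = 1.0 / np.linalg.norm(D.T @ D, 2)   # step from Lipschitz constant
    for _ in range(steps):
        grad = D.T @ (D @ z - x)              # gradient of 0.5*||x - Dz||^2
        z = z - step * grad
        # Soft-thresholding enforces the sparseness constraint.
        z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return z

# Synthesize an input from a sparse ground truth, then recover it.
truth = rng.random(n_causes) * (rng.random(n_causes) < 0.1)
x = D @ truth
z = infer_causes(x)
print("active causes:", np.count_nonzero(np.abs(z) > 1e-6))
```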
Hava Siegelmann
Program Manager, DARPA
Bio: Professor of Computer Science and Core Member of the Neuroscience and Behavior Program at the University of Massachusetts, Amherst. Program Manager at DARPA. Recipient of the INNS Hebb Award (2016).
Title: How brain architecture leads to abstract thought
(Joint with Patrick Taylor)
Abstract:
Using 20 years of functional magnetic resonance imaging (fMRI) data from tens of thousands of brain-imaging experiments, our recent research suggests how the physical brain could give rise to abstract thought. The work not only demonstrates the basic operational paradigm of cognition, but also shows that all cognitive behaviors exist on a hierarchy, starting with the most tangible behaviors such as finger tapping or pain, moving through consciousness, and extending to the most abstract thoughts and activities such as naming. This hierarchy of abstraction is found to be related to the connectome structure of the whole human brain.
Paul Werbos
Program Director (retired), National Science Foundation
Bio: Former program director at the National Science Foundation. Recipient of the INNS Hebb Award (2011). INNS Fellow.
Title: Backpropagation in the Brain and More Advanced Learning Systems
Abstract:
The recent explosion of interest in deep learning based on backpropagation is the result of empirical demonstration and testing of methods developed long ago, funded by NSF, DARPA, and Google. The usual convolutional neural networks are not a valid model of computing in the cerebral cortex, because they assume Euclidean symmetry and are unable to learn simple mappings required, for example, in learning how to navigate a cluttered space; however, more general networks were also developed years ago and demonstrated on less popular problems such as power-grid forecasting and incremental chess playing. This year, empirical tests were also carried out on 24 kHz data from prefrontal cortex, strongly supporting our original theory of brain intelligence, in which regular clocks and alternating forward and backward passes explain the power of cortical computation and are preferred in the data over the more ancient theories of purely asynchronous computing by spiking networks or ODEs. An empirical pathway has also been laid out to allow physical backpropagation of information, which promises to enable a new level of general intelligence through analog quantum computing, more “conscious” than what we see in the mammal brain.
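For reference, the alternating forward and backward passes the abstract refers to are, in their textbook form, the two phases of backpropagation. The sketch below writes out one gradient step for a tiny two-layer network, with the backward pass derived by hand; it is a generic illustration of the algorithm, not the more general networks discussed in the talk, and all sizes and the learning rate are assumptions.

```python
# Textbook backpropagation: one gradient step for a tiny
# two-layer tanh network, backward pass written by hand.
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=(4,))               # input
y = rng.normal(size=(2,))               # target
W1 = rng.normal(scale=0.5, size=(8, 4))
W2 = rng.normal(scale=0.5, size=(2, 8))
lr = 0.1

# Forward pass.
h = np.tanh(W1 @ x)
y_hat = W2 @ h
loss = 0.5 * np.sum((y_hat - y) ** 2)

# Backward pass: propagate the error derivative layer by layer.
d_yhat = y_hat - y                       # dL/dy_hat
dW2 = np.outer(d_yhat, h)                # gradient for W2
d_h = W2.T @ d_yhat                      # chain rule through W2
d_hpre = d_h * (1.0 - h ** 2)            # through tanh nonlinearity
dW1 = np.outer(d_hpre, x)                # gradient for W1

W1 -= lr * dW1                           # gradient descent step
W2 -= lr * dW2
print(f"loss before step: {loss:.4f}")
```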