2011 International Joint Conference on Neural Networks

San Jose, California      July 31 - August 5, 2011


Sponsoring Organizations


www.inns.org


Important Due Dates

Competition Proposals
August 31, 2010

Tutorial Proposals
December 15, 2010

Special Session Proposals
December 15, 2010

Workshop Proposals
December 15, 2010

Panel Proposals
December 15, 2010

Decision Notification
(Competitions, Panels &
Special Sessions)
December 20, 2010

Decision Notification (Tutorials, Workshops)
January 5, 2011

Paper/Abstract Submission
February 10, 2011

Decision Notification (Papers/Abstracts)
April 10, 2011

Final Submission (Papers)
May 5, 2011

Workshop Schedule


Session 1: Thursday, August 4, 2011 (Afternoon)
Future Perspectives of Neuromorphic Memristor Science & Technology
Robert Kozma and Robinson Pino
Problems and Challenges Mapping Spiking Neurons to Cognition and Behavior
Asim Roy, Juyang Weng, Leonid Perlovsky, Nik Kasabov, Harry Erwin, Marley Vellasco, Narayan Srinivasa, Yoonsuck Choe, Paolo Arena, Dileep George, Yan Meng and Sylvie Renaud
Concept Drift and Learning in Nonstationary Environments
Robi Polikar, Cesare Alippi, Manuel Roveri, Haibo He
Cognition and the Fringe: Intuition, Feelings of Knowing, and Coherence
Bruce Mangan, Bernard J. Baars, Uzi Awret
Integral Biomathics
Leslie Smith, Plamen Simeonov, Andree Ehresmann
Results & Methods of the Neural Network Grand Forecasting Challenge on Time Series Prediction
Sven F. Crone, Nikolaos Kourentzes
 
Session 2: Friday, August 5, 2011 (Morning)
Neuromorphic Hardware: design and applications I
Sylvie Renaud, Shih-Chii Liu, Hsin Chen, Eugenio Culurciello
IJCNN Competitions Workshop I
Isabelle Guyon and Sven Crone
 
Session 3: Friday, August 5, 2011 (Afternoon)
Neuromorphic Hardware: design and applications II
Sylvie Renaud, Shih-Chii Liu, Hsin Chen, Eugenio Culurciello
IJCNN Competitions Workshop II
Isabelle Guyon and Sven Crone
 

Abstract

PROBLEMS AND CHALLENGES MAPPING SPIKING NEURONS TO COGNITION AND BEHAVIOR

Chairs Narayan Srinivasa (HRL Laboratories)
Asim Roy (Arizona State University)
Sponsor Autonomous Machine Learning (AML) SIG of INNS
Participants Leonid Perlovsky - Harvard University and The Air Force Research Laboratory, USA
Nik Kasabov - KEDRI/AUT, New Zealand
Juyang Weng - Michigan State University, USA
Yoonsuck Choe - Texas A&M University, USA
Paolo Arena - University of Catania, Italy
Harry Erwin - University of Sunderland, UK
Marley Vellasco - Pontifícia Universidade Católica do Rio de Janeiro, Brazil
Narayan Srinivasa - HRL Laboratories, USA
Asim Roy - Arizona State University, USA
Dileep George - Vicarious Inc.
Yan Meng
Sylvie Renaud - IMS, University of Bordeaux, FR
link http://www.autonomoussystems.org/news/news.htm#IJCNN2011Workshop
Format Short presentations by participants, followed by a group discussion with audience participation. This could lead to future research collaborations in this challenging area.
Abstract Biological systems are adept learners, capable of exhibiting robust, efficient and intelligent behavior under constantly changing or novel conditions. Recent discoveries in neuroscience suggest that the brain uses spike-based communication for energy-efficient computation, and that it employs spike-timing-dependent plasticity (STDP) as the fundamental spike-timing-based learning rule for a wide range of learning tasks. The field is still nascent, however, and considerable research is needed to bridge the gap between cellular mechanisms such as STDP and the large-scale behavioral functions that must be realized from these basic mechanisms. The focus of this workshop is the computational modeling of spike-based networks that can realize system-level behavioral functions from networks composed of spike-based cellular mechanisms. Addressing this gap is expected to enable autonomous learning systems whose computational efficiency and behavioral complexity scale comparably to those of biological systems.
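As a concrete illustration of the cellular mechanism named above, the sketch below implements the standard pairwise exponential STDP window; the amplitudes and time constants are illustrative assumptions, not values endorsed by the workshop.

import math

# Pairwise STDP: the weight change depends on the timing difference
# dt = t_post - t_pre between a post- and a pre-synaptic spike.
# All parameter values are assumptions chosen for illustration only.
A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # time constants in milliseconds

def stdp_delta_w(t_pre, t_post):
    """Weight update for a single pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre fires before post -> potentiation
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    if dt < 0:    # post fires before pre -> depression
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0

# A synapse whose pre-synaptic spike precedes the post-synaptic spike by
# 5 ms is strengthened; the reverse ordering weakens it.
w = 0.5
w += stdp_delta_w(t_pre=10.0, t_post=15.0)   # potentiation
w += stdp_delta_w(t_pre=30.0, t_post=22.0)   # depression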
 

CONCEPT DRIFT AND LEARNING IN NONSTATIONARY ENVIRONMENTS

Chairs Robi Polikar - Rowan University
Cesare Alippi - Politecnico di Milano
Manuel Roveri - Politecnico di Milano
Haibo He - University of Rhode Island
Topics of Interest and Scope The scope of our discussion will include the following topics:
  • Incremental learning / lifelong learning / cumulative learning
  • Data mining from streams of data
  • Drift / change / anomaly detection in data streams
  • Learning in non-stationary environments / concept-drift environments / dynamic environments
  • Architectures / techniques / algorithms for learning in such environments
  • Applications that call for incremental learning or learning in nonstationary environments
  • Development of test-sets / benchmarks for evaluating algorithms learning in such environments
  • Issues relevant to above mentioned or related fields
Format After a brief introduction to the field, the participants will provide short presentations on the current state of the art in concept drift, drift and anomaly detection, and learning in nonstationary environments. We will then open the floor to a forum discussion on topics such as existing and open problems, challenges, potential future problems and solutions, and future funding and collaboration opportunities.
    Abstract One of the fundamental goals in computational intelligence is to achieve brain-like intelligence, a remarkable property of which is the ability to learn incrementally from noisy and incomplete data and to adapt to changing environments. The ability of a computational model to learn under various environments has been well researched, with promising progress, but the vast majority of these efforts make two fundamental assumptions: i) there is sufficient and representative training data; and ii) such data are drawn from a fixed, albeit unknown, distribution. Alas, these assumptions often do not hold in many applications of practical importance. Recent efforts towards incremental and online learning may allow us to relax the "sufficiency" requirement by continuously updating a model to learn from small batches of data. Yet, in many incremental learning algorithms the second assumption remains: the data that incrementally become available are still drawn from a fixed but unknown distribution. More recently, other incremental approaches, also called concept drift algorithms, have attempted to remove this assumption by allowing a stream or batches of data whose underlying distribution changes over time. These early approaches, however, make other assumptions, such as restricting the type of change in the distribution, are primarily heuristic in nature with many free parameters requiring fine-tuning, and have not been evaluated on large-scale real-world applications.
    Considering that our ultimate goal in computational intelligence is to attain brain-like intelligence, and that the brain can, and routinely does, learn incrementally and in nonstationary, dynamic environments, the need for a framework for learning from, and adapting to, a nonstationary environment is very real. Combined with a growing number of real-world applications that could immediately benefit from such algorithms, such as learning from financial or climate data, it is clear that there is much work to be done toward solving nonstationary learning problems.
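    To make the setting concrete, the fragment below sketches one simple way an incrementally trained model can be paired with an error-rate drift monitor. The threshold rule is a deliberately simplified stand-in for detectors discussed in the literature (e.g. DDM); the class name, window size and margin are illustrative assumptions, not part of any algorithm presented at the workshop.

    from collections import deque

    class DriftMonitor:
        """Flags drift when the recent error rate exceeds the long-run
        error rate by a fixed margin (a deliberately simplified detector)."""
        def __init__(self, window=100, margin=0.1):
            self.errors = deque(maxlen=window)
            self.total_errors, self.total_seen = 0, 0
            self.margin = margin

        def update(self, correct):
            err = 0 if correct else 1
            self.errors.append(err)
            self.total_errors += err
            self.total_seen += 1
            recent = sum(self.errors) / len(self.errors)
            overall = self.total_errors / self.total_seen
            return recent > overall + self.margin   # True -> drift suspected

    # Usage with any incrementally trainable model (hypothetical interface):
    #   monitor = DriftMonitor()
    #   for x, y in stream:
    #       if monitor.update(model.predict(x) == y):
    #           model = make_fresh_model()     # or reweight an ensemble
    #       model.partial_fit(x, y)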
    There is a small but growing body of literature indicating the increasing prominence of concept drift applications. The target audience of this workshop includes both the general computational intelligence community, who may not be familiar with this area, and those actively engaged in concept drift research. Hence our goals are to:
    1. Introduce the problem of concept drift, nonstationary learning and, more generally, dynamic learning, with its associated issues, to the greater neural network & computational intelligence community, who may not be familiar with the topic yet would like to familiarize themselves with the most recent approaches to this problem;
    2. Provide a forum for researchers who have been actively working in this area to exchange new ideas with each other, as well as with the rest of the neural network & computational intelligence community.
       

    Cognition and the Fringe: Intuition, Feelings of Knowing, and Coherence

    Chairs Bruce Mangan, Institute of Cognitive and Brain Studies, UC Berkeley
    Bernard J. Baars, The Neurosciences Institute, San Diego,
    Uzi Awret, George Mason University
    Topic 1 HOW CONSCIOUSNESS INTEGRATES PARALLEL AND SERIAL INFORMATION: Using reverse engineering and phenomenology to explore a natural hybrid system
    Bruce Mangan
    (1) Human cognition combines parallel and serial processing at the neural level, and this hybrid structure is also reflected in consciousness. The sensory contents in the focus of attention operate serially. The pervasive non-sensory experiences in the background of consciousness - the "fringe" as William James called it - work to represent, in condensed form, vast amounts of non-conscious parallel information. The fringe mediates virtually every aspect of conscious cognition, including perception, language comprehension, problem solving, and voluntary retrieval.
    (2) The fringe contains a huge number of analytically distinct experiences, but probably the most important, and almost certainly the most protean, is what I have called "rightness" (Mangan, 1993). James gave this experience many different names, including "the feeling of right direction" and "dynamic meaning," but did not consider its functional relation to non-conscious processes. In the current psychological literature, the term Feeling-of-Knowing (FOK) derives from this aspect of James' work, but unfortunately FOK now confounds the evaluative component in fringe experience with many other fringe contents (Mangan, 2001).
    (3) Functionally, rightness may be the single most important cognitive datum, since it constitutes the evaluative nexus joining conscious and non-conscious processing, and is a crucial source of feedback. Rightness signals the degree to which, at any given moment, the contents of consciousness fit with a vast body of parallel context information processed at the non-conscious level. Habituation aside, the more tightly our conscious/non-conscious system is integrated, the stronger the experience of rightness. Rightness is the core element in feelings of coherence, wholeness, meaningfulness, and making sense. It is the problem-solving Ah Ha! Right! YES! It is at the heart of aesthetic experience and some forms of mystical experience.
    (4) We will review James' descriptive account of rightness and the fringe, and then augment it with a reverse engineering, functional analysis. In part this convergence gives James' introspective claims further weight. For many peculiar aspects of fringe experience (e.g., it has no evident sensory content and actively resists the "grasp" of focal attention) turn out to make good bio-design sense. We can explain the character and structure of our phenomenology via its function - just as we explain other features of our biology. In this case we will see how the experience of rightness works to bridge the parallel/serial divide.
    Time permitting, we will also consider ways "goodness-of-fit" metrics throw light on the architecture of the massively parallel and distributed non-conscious processes that "compute" rightness.

    Topic 2 The Dumb Hypothesis: Are subjective “Feelings of Knowing” just global broadcasts from the non-sensory cortex?
    Bernard J. Baars
    Feelings of Knowing (FOKs) are spinning whirlpools in the stream of consciousness. They are ever-present, but each one is fleeting and diaphanous.
    If those words gave you pause, you just experienced a feeling of knowing (FOK). We can study FOKs by asking people to recall an unusual word. When we know that we know a certain word, but can't recall it, we may have a "tip of the tongue" state for several seconds. TOTs are a subset of FOKs. I use the tip-of-the-tongue case to show that:
    (1) Some FOKs include declarative, empirical propositions about our own minds as well as the outer world. They may be subjectively experienced as “fringy” and vague, but they can be quite precise in their empirical truth conditions.
    (2) FOKs are also called "judgments" (as opposed to "percepts"), or "intuitions," "ideas," "states of mind," "meanings," "intentions," "beliefs," "attitudes," and "expectations." The emotions frequently lead to FOKs about ourselves, like "anxiety" or "love." Wundt called "the intention to express a thought" a Gesamtvorstellung, which can be translated as an "integrated FOK." Shakespeare's King Lear provides an elegant example of "signal anxiety," an FOK that tells us what not to think about. FOKs play a very wide range of cognitive roles.
    (3) FOKs are not limited to the visual periphery, but seem to include all accurately reportable mental events that are not percepts, vivid images, inner speech, or dream images. (Mangan, 1993; 2001)
    (4) The Dumb Hypothesis. It follows that "fringe" experiences may simply be conscious 'global broadcasts' emanating from the non-sensory cortex. This includes TOTs (prefrontal, temporal), gut feelings (insula), intentions (motor and PFC), a mystical sense of presence and spaciousness (parietal), feelings of comprehension (right prefrontal, temporal?), self-related (precuneus), love (frontomedial), and mental effort (ACC & PFC).
    The Dumb Hypothesis is testable and has received some empirical support.
    Topic 3 The fringe in the brain
    Uzi Awret
    Fringe consciousness is typified by conscious mental states such as "tip of the tongue" experiences, the feeling of being right, "knowing that you know before you know" as well as aesthetic and religious sensibilities, abstract and non-specific states of consciousness, the penumbra of conscious mental states in general and possibly all non-sensory conscious mental states.
    I will explore:
    (a) The brain correlates of fringe conscious events compared to focal ones. In particular, Kounios & Beeman (2004) have found evidence for different alpha and theta activity in the "Aha!" experience. Ahissar and Hochstein's Reverse Hierarchy Theory has a bearing on these differences. Antonio Damasio's recent writing on the feeling of knowing will also be discussed.
    (b) Phenomenological differences. I will introduce Žižek's use of Alfred Hitchcock's films to elucidate Lacan's 'point de capiton', relating it to the feeling of being right, and Velázquez's 'Las Meninas' as an exploration of the fringe.
    (c) Finally, a better understanding of fringe experiences is needed to pinpoint the neural correlates of consciousness.
       

    INTEGRAL BIOMATHICS

    Chairs Leslie Smith
    Plamen Simeonov
    Andree Ehresmann
    Abstract Are we nearly there yet? (A common question asked by small children in the back of the car.) Are the definitions of what is life (Schrödinger, Varela), life itself (Rosen) and more than life itself (Louie) convergent? Did we really expect a discovery such as the arsenic-incorporating Mono Lake bacteria to start redefining our concept of life? Are life forms inevitable in the evolution of the Universe? Where is the boundary (if there is one) between the machine and the organism? Are our theories of living and cognitive systems strong enough? Does our underlying understanding of computation and cognition, of the difference between living, non-living and synthetic systems, and of sensing and action suffice for building systems that can really mimic living systems, systems that can cope with the vagaries of real environments, real sensors, and real actuators that do not always behave quite as model systems do? Or is there a theoretical (or even not so theoretical) underlying area that is still missing? These are the basic questions posed by the emerging discipline of Integral Biomathics, which we wish to address in this workshop. Opinions are divided: some seem to believe we have only a number of areas to fill in, whereas others reckon that our understanding is severely limited. What are the perspectives for short-term and long-term research? We expect to draw the attention of a broad interdisciplinary audience from fields ranging from mathematics, physics, chemistry and biology to computer science and engineering. To enable broad participation, this workshop will include traditional talks, position statements and pecha kucha (20x20) presentations.
       

    Neuromorphic Hardware: design and applications

    Chairs Sylvie Renaud, IMS, University of Bordeaux, FR
    Shih-Chii Liu, ETH Zurich, Switzerland
    Hsin Chen, NTHU, Taiwan
    Eugenio Culurciello, e-Lab, Yale University, USA
    Topic This workshop addresses design techniques and applications of neuromorphic hardware in standard and emerging technologies. Applications of neuromorphic hardware in biomedical and bio-inspired systems will be presented. The workshop will also include discussions about the development of standards for the neuromorphic engineering community (information coding, communication protocols, description languages).
    Goal To propose interactive, state-of-the-art lectures and to generate discussion and proposals of original applications for neuromorphic hardware.
    Note: Participants will vote for the best "neuromorphic proposal", whose author will receive a one-of-a-kind "IJCNN neuromorphic prize" ...
    Lectures S. Renaud (University of Bordeaux, FR)
    Technologies for VLSI Spiking Neural Networks (SNN) (subthreshold CMOS, above-threshold CMOS, SiGe, nanotechnologies): neurons, synapses and on-chip memory.
    R. Pino (Air Force Research Laboratory, NY, USA)
    "Memristor technology in the context of neuromorphic computation: a brief
    Shih-Chii Liu (ETH - Switzerland)
    AER communication protocol and devices for SNN; application to the silicon cochlea and the speech recognition task.
    Hsin Chen (NTHU, Taiwan):
    Automatic fly-behaviour monitoring microsystems based on spiking image sensors and event-driven protocols.
    Leslie Smith (U. Stirling, Scotland, UK)
    Neuromorphic Systems: a sideways look
       

    IJCNN Competitions

    Details and Schedules Please refer to http://www.ijcnn2011.org/competitions.php
    Chairs Isabelle Guyon
    Sven Crone
    Target audience
  • Competition participants
  • Practitioners interested in competition results
  • Researchers in the domains covered by the challenge: data mining, vision, unsupervised learning, transfer learning, time series forecasting, social networks
    Abstract Challenges have recently proved a great stimulus for research in machine learning, pattern recognition, and robotics. Robotics contests seem to be particularly popular, with hundreds of events every year; the most visible are probably the DARPA grand challenges of autonomous ground vehicle navigation and RoboCup, which features several challenges for robots, including playing soccer and rescuing people. In data mining and machine learning, several conferences have regularly organized challenges over the past 10 years, including the well-established Text REtrieval Conference (TREC) and the Knowledge Discovery in Databases cup (KDD cup). More specialized pattern recognition and bioinformatics conferences have also held their own contests, e.g. CASP for protein structure prediction, DREAM for reverse engineering biological networks and ICDAR for document analysis. The European network of excellence PASCAL has actively sponsored a number of challenges around hot themes in machine learning and vision, which have punctuated workshops at NIPS, CVPR, ICCV, AISTATS and other conferences. These contests are oriented towards scientific research, and the main reward for the winners is to disseminate the product of their research and obtain recognition. In that respect, they play a different role than challenges like the Netflix prize and the Yahoo! contest, which offer large monetary rewards for solving a task of value to industry but are narrower in scope.

    Attracting hundreds of participants and the attention of a broad audience of specialists, as well as sometimes the general public, these events have been important in several respects: (1) pushing the state of the art, (2) identifying techniques which really work, (3) attracting new researchers, (4) raising the standards of research, (5) giving the opportunity to non-established researchers to make themselves rapidly known.

    The IJCNN conference has been supporting a challenge program for the past few years (see http://clopinet.com/isabelle/Projects/IEEE/). This year, the competition program includes 5 competitions. With the generous sponsorship of IJCNN, the winners will earn a free registration. In addition to the oral presentations of the organizers and the winners, other challenge participants will have the opportunity to show posters. There will be a keynote presentation by a funding agency.
       
       

    Results & Methods of the Neural Network Grand Forecasting Challenge on Time Series Prediction

    Chairs Sven Crone - Lancaster University, UK
    Nikolaos Kourentzes - Lancaster University, UK
    Topics of Interest and Scope The scope of the workshop focuses on:
  • Presenting results for the 2nd leg of the NNGC forecasting competition (through the competition organizer)
  • Presenting methods & algorithms used in the competition (through the contestants)
  • Discussing future improvements of competitions such as this one
    Format After a brief introduction to the multi-year competition (started at WCCI 2010), including the datasets and the results of the first round, participants in the competition will have the opportunity to present the algorithms they used in oral presentations. The workshop will continue with a presentation of this year's results and an analysis of whether contestants have changed their approaches. The workshop will close with an open panel discussion on the competition design, outcomes, and suggestions for future competitions.
    Abstract The Neural Network Grand Forecasting Challenge requires participants to forecast one, two or more of a selection of 6 transportation datasets (each containing 11 time series) as accurately as possible, using methods from computational intelligence and applying a consistent methodology. The 6 datasets cover different time frequencies: yearly, quarterly, monthly, weekly, daily and hourly transportation data. Transportation is considered a prerequisite to economic prosperity, mobility and wellbeing in a civilized world, in addition to providing one of the largest service sectors worldwide. Forecasting time series of transportation demand and flows, including airline, rail and car passenger traffic, poses a number of challenges: the data may be measured at different time frequencies and, depending on the frequency, may contain a number of time series patterns, ranging from no seasonality to multiple overlaid seasonalities, local trends, structural breaks, outliers, and zero and missing values. These patterns are often driven by a combination of unknown and unobserved causal forces tied to the underlying yearly calendar, such as recurring seasonal periods, bank holidays, or special events of different length and magnitude of impact, with different lead and lag effects.

    We seek to evaluate the accuracy of computational intelligence (CI) methods in time series forecasting, extending the earlier NN3 & NN5 competitions onto a new set of data of multiple frequencies. We seek to determine progress in CI modeling for forecasting and to disseminate knowledge on "best practices" across time series of different frequencies. To facilitate knowledge exchange, the competition will be run in 3 separate tournaments of 6 months each. In each tournament, one, two or more of the 6 datasets of 11 time series each, with a particular time frequency, must be forecasted. To extend the task across the year, the datasets in each tournament round will be released sequentially, 2 at a time. Contestants must use a consistent methodology within each tournament, but are allowed to change their methodology between tournaments.

    The prediction competition is open to all methods of computational intelligence, incl. feed-forward and recurrent neural networks, fuzzy predictors, evolutionary & genetic algorithms, decision & regression trees, support vector regression, hybrid approaches etc., used in all areas of forecasting, prediction & time series analysis. We also welcome submissions of statistical methods as benchmarks, but they are not eligible to "win" the NN GC.
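    As an illustration of the kind of lagged-input, feed-forward approach a contestant might take, the sketch below frames a univariate seasonal series as a supervised regression problem. The synthetic series, the lag order of 12, and the network settings are assumptions chosen purely for illustration; they are not part of the competition data or rules.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def make_lagged(series, n_lags=12):
        """Turn a 1-D series into (X, y) pairs: n_lags inputs, one target."""
        X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
        y = series[n_lags:]
        return X, y

    # Synthetic monthly series with yearly seasonality plus noise (illustrative).
    rng = np.random.default_rng(0)
    t = np.arange(240)
    series = 100 + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, len(t))

    X, y = make_lagged(series, n_lags=12)
    model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    model.fit(X[:-12], y[:-12])          # hold out the final year for testing
    print(model.predict(X[-12:]))        # one-step-ahead forecasts for that year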
       

    Future Perspectives of Neuromorphic Memristor Science & Technology

    2pm - 5pm, August 4th, 2011, Doubletree Hotel San Jose
    Chairs Robert Kozma, University of Memphis & AFRL Sensors Directorate
    Robinson Pino, AFRL Information Directorate
    Abstract In 2008, scientists from Hewlett-Packard identified a nano-scale device behaving as the memristor, a hypothetical circuit element predicted in 1971 by Leon Chua of UC Berkeley. This has generated unprecedented worldwide interest because, among many applications, memristors can be used as super-dense non-volatile memories for building instant turn-on computers. It is no overstatement to say that memristor technology is expected to revolutionize computer technology and engineering in the coming decades. Our workshop explores the neuromorphic implications of memristor technology. Many brain researchers have suggested that memristor-based analog memory can be used to build brain-like learning machines with nano-scale memristive synapses. The workshop will discuss the following topics (the memristor's defining relation is recalled after the list):
  • Memristive hardware devices affording neuromorphic implementations;
  • Adaptation and learning algorithms benefiting from memristive capabilities;
  • Hybrid wetware-hardware technologies for in-vitro experiments;
  • Core application areas that provide possible benchmarks for the novel technology;
  • Broader implications of memristors and transformational changes to the society.
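    For reference, the defining relation behind a charge-controlled memristor, as introduced by Chua, can be written as follows (a textbook statement, not a model of any particular device discussed in the workshop):

    % Flux is a function of charge, so the device acts as a charge-dependent
    % resistance M(q), the memristance.
    \varphi = \hat{\varphi}(q), \qquad
    v(t) = \frac{d\varphi}{dt}
         = \frac{d\hat{\varphi}}{dq}\,\frac{dq}{dt}
         = M\big(q(t)\big)\, i(t), \qquad \frac{dq}{dt} = i(t).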
       

    Additional Sponsors


    NSF
    CogniMem
    University of Cincinnati
    Toyota


    Copyright © All logos and seals are owned by their respective societies or institutions.

    Image courtesy of the San Jose Convention & Visitors Bureau
