David B. Fogel
Natural Selection, Inc., San Diego, CA
http://www.natural-selection.com/people_dfogel.html
Evolutionary computation is a machine learning approach that draws inspiration from nature's processes of natural selection and variation. The field of evolutionary computation is broad, encompassing many different inspirations from nature, including modeling at the species, individual, and genetic levels. Theoretical results on the learning properties of these algorithms have been offered, although some have been reanalyzed and corrected within the last decade. Evolutionary individual-based models are simulations that incorporate individual purpose-driven agents subject to natural selection and variation. These models can be employed to solve problems in industry and other disciplines, and also to gain insight into ecologies and animal behavior. In particular, aspects of evolutionary game theory can be compared with evolutionary individual-based modeling; the results from these two approaches are often quite different. Several results will be offered that highlight these differences. It is of interest to determine which approach may have greater fidelity in predicting aspects of the real world.
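The selection-and-variation loop at the heart of evolutionary computation can be sketched minimally as follows. This is an illustrative toy (all function names, parameters, and the quadratic objective are my own assumptions, not taken from the talk):

```python
import random

def evolve(fitness, dim, pop_size=20, generations=100, sigma=0.1, seed=0):
    """Minimal elitist evolutionary loop: Gaussian variation plus
    truncation selection on a real-valued genome."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        # Variation: each parent produces one mutated offspring.
        offspring = [[g + rng.gauss(0, sigma) for g in ind] for ind in pop]
        # Selection: keep the fittest individuals among parents + offspring.
        pop = sorted(pop + offspring, key=fitness, reverse=True)[:pop_size]
    return pop[0]

# Toy objective: maximize -sum(x^2), i.e. drive the genome toward zero.
best = evolve(lambda x: -sum(g * g for g in x), dim=3)
```

Because selection here keeps parents as well as offspring, the best solution found never degrades; real evolutionary and individual-based models vary widely in how strictly they enforce such elitism.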
Joe Halpern
Cornell University
http://www.cs.cornell.edu/home/halpern/
Nash equilibrium is the most commonly used notion of equilibrium in game theory. However, it suffers from numerous problems. Some are well known in the game theory community; for example, the Nash equilibrium of the repeated prisoner's dilemma is neither normatively nor descriptively reasonable. However, new problems arise when considering Nash equilibrium from a computer science perspective: for example, Nash equilibrium is not robust (it does not tolerate "faulty" or "unexpected" behavior), it does not deal with coalitions, it does not take computation cost into account, and it does not deal with cases where players are not aware of all aspects of the game. In this talk, I discuss solution concepts that try to address these shortcomings of Nash equilibrium. This talk represents joint work with various collaborators, including Ittai Abraham, Danny Dolev, Rica Gonen, Rafael Pass, and Leandro Rego. No background in game theory will be presumed.
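As background for readers new to the solution concept, a pure Nash equilibrium can be checked directly from the definition: no player gains by unilaterally deviating. A small sketch on the one-shot prisoner's dilemma (the payoff numbers are the standard textbook values, chosen here for illustration):

```python
# Payoff matrix for the one-shot prisoner's dilemma, indexed
# [row action][column action] with 0 = cooperate, 1 = defect.
# Each cell holds (row player's payoff, column player's payoff).
PAYOFF = [[(3, 3), (0, 5)],
          [(5, 0), (1, 1)]]

def is_nash(row, col, payoff):
    """A profile is a pure Nash equilibrium iff neither player can
    strictly gain by unilaterally switching actions."""
    row_ok = all(payoff[row][col][0] >= payoff[r][col][0] for r in (0, 1))
    col_ok = all(payoff[row][col][1] >= payoff[row][c][1] for c in (0, 1))
    return row_ok and col_ok

# Mutual defection (1, 1) is the unique pure equilibrium, even though
# mutual cooperation would pay both players more.
equilibria = [(r, c) for r in (0, 1) for c in (0, 1) if is_nash(r, c, PAYOFF)]
```

The fact that this check singles out mutual defection, an outcome both players would prefer to avoid, is one face of the normative problems the talk addresses.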
Kristen Grauman
The University of Texas
http://www.cs.utexas.edu/~grauman/
How should an agent learn about visual objects? Object recognition techniques typically follow a one-time, one-pass learning pipeline: given some manually labeled exemplars, they train a model per category, and then can identify those same objects in novel images. While effective on prepared datasets, the strategy is not scalable and assumes a fixed category domain. We instead consider visual learning as a continuous process, in which the algorithm constantly analyzes unlabeled image data in order to both strengthen and expand its set of category models. In this talk, I present an approach that actively seeks human annotators’ help when it is most needed, and autonomously discovers novel objects by mining new data. I show how to address important technical challenges in large-scale active visual learning, such as accounting for the information/effort tradeoff inherent to annotation requests, surveying massive unlabeled data pools, and targeting questions to many annotators working in parallel. Finally, I show how the system can more effectively discover novel objects in the context of those that were previously taught, pacing itself according to the predicted difficulty of the tasks. The proposed techniques yield state-of-the-art object detection results, and offer a new view of visual object learning as an interactive and ongoing process.
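The information/effort tradeoff mentioned above can be illustrated with a simple cost-sensitive uncertainty-sampling rule. This is a generic sketch, not the approach from the talk; the scoring rule, cost model, and example numbers are all illustrative assumptions:

```python
import math

def pick_queries(probs, costs, budget):
    """Rank unlabeled examples by predicted-label entropy per unit
    annotation cost, then greedily select queries within a budget."""
    def entropy(p):
        return -sum(q * math.log(q) for q in p if q > 0)
    ranked = sorted(range(len(probs)),
                    key=lambda i: entropy(probs[i]) / costs[i],
                    reverse=True)
    chosen, spent = [], 0.0
    for i in ranked:
        if spent + costs[i] <= budget:
            chosen.append(i)
            spent += costs[i]
    return chosen

# Three unlabeled images: the classifier is unsure about the first two
# and confident about the third; the second is cheaper to annotate.
probs = [[0.5, 0.5], [0.45, 0.55], [0.95, 0.05]]
costs = [2.0, 1.0, 1.0]
queries = pick_queries(probs, costs, budget=2.0)
```

Note how the cheap-but-uncertain second image outranks the maximally uncertain but expensive first one; weighting information by effort changes which annotation requests are worth making.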
Pierre-Yves Oudeyer
INRIA, Bordeaux Sud-Ouest
http://www.pyoudeyer.com/
Developmental robotics aims to build robots which, once "out of the factory" and in the "wild" of the real world, are capable of cumulatively learning an open-ended repertoire of new skills, both through self-experimentation and through social interaction with humans. A major challenge is that the sensorimotor spaces encountered by such robots, including the interaction of their body with novel external objects and persons, are high-volume, high-dimensional, unbounded, and partially unlearnable. If one wants robots to be capable of efficient learning in such spaces, one must take inspiration from infant development, which shows the importance of various families of developmental constraints. In this talk, I will review several of these constraints, including mechanisms for curiosity-driven learning, maturation, sensorimotor primitives, joint attention and joint intention in social guidance, self-organization, and morphological computation, and show how they can transform apparently daunting machine learning problems into much more tractable ones.
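The interplay between curiosity-driven learning and partially unlearnable spaces can be sketched with a toy agent that tracks its prediction error per sensorimotor "region" and preferentially revisits regions where error is dropping fastest. This is only an illustrative caricature of learning-progress-based exploration, not the architecture from the talk; the region dynamics, epsilon-greedy choice, and all names are assumptions:

```python
import random

def curious_explore(regions, steps=500, eps=0.2, seed=0):
    """Toy learning-progress exploration: sample the region whose
    prediction error decreased most on its last visit, with occasional
    random exploration. Mastered and unlearnable regions offer little
    sustained progress, so attention concentrates on learnable ones."""
    rng = random.Random(seed)
    errors = {name: [1.0] for name in regions}   # error history per region
    visits = {name: 0 for name in regions}
    for _ in range(steps):
        def progress(name):
            hist = errors[name]
            # Unvisited regions get optimistic progress so each is tried.
            return hist[-2] - hist[-1] if len(hist) > 1 else 1.0
        if rng.random() < eps:
            name = rng.choice(list(regions))     # random exploration
        else:
            name = max(regions, key=progress)    # greedy on progress
        visits[name] += 1
        errors[name].append(regions[name](errors[name][-1], rng))
    return visits

regions = {
    "learnable": lambda err, rng: err * 0.95,              # practice pays off
    "unlearnable": lambda err, rng: rng.uniform(0.8, 1.2), # pure noise
    "mastered": lambda err, rng: 0.0,                      # nothing left to learn
}
visits = curious_explore(regions, steps=500)
```

Even this crude version shows the intended effect: the mastered region stops attracting attention once its progress flattens, and time concentrates where errors can still shrink, which is one intuition behind using developmental constraints to make an unbounded learning problem tractable.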