On Successes and Failures of Deep Learning. Invited talk at JDS 2017.
Failures of Deep Learning. At the Representation Learning workshop, Simons Institute, Berkeley, 2017.
Safe, Multi-Agent, Reinforcement Learning for Autonomous Driving. Invited talk at ELSC Conference: Information, Control, and Learning - The Ingredients of Intelligent Behavior, 2016.
Engineering by Machine Learning. Invited talk at the Technion, 2016.
Practically Relevant Theory and Theoretically Relevant Practice of Deep Learning. Keynote talk at DALI 2016.
On the Computational Complexity of Deep Learning. Invited talk, given at the Minerva Weizmann Workshop on Computational Challenges in Large Scale Image Analysis, 2015, and at the Optimization and Statistical Learning workshop, 2015.
Accelerating Stochastic Optimization. Invited talk, given at the Tel-Aviv Deep Learning Master Class.
The sample-computational tradeoffs. Invited talk, given at the Multi-Trade-offs in Machine Learning workshop, NIPS 2012, and at the Optimization and Statistical Learning workshop, Les Houches, France, 2013.
Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization. At the Optimization for Machine Learning workshop, NIPS, 2012.
Learnability beyond uniform convergence. Keynote talk at ALT 2012.
Using more data to speed up training time. Invited talk, given at the COST Workshop, NIPS, 2011.
Learnability beyond uniform convergence. Invited talk, given at the Learning Theory workshop, FOCM, Budapest, July 2011, and at the Dagstuhl seminar on Mathematical and Computational Foundations of Learning Theory.
What else can we do with more data? ITA workshop, UCSD, 2011.
Machine Learning in the Data Revolution Era. Invited talk, Google \& University of Waterloo, Canada, July, 2009.
Efficient learning with partial information on each individual example. Weizmann seminar, January, 2009.
On the tradeoff between computational complexity and sample complexity in learning. CS seminar, The Hebrew University, December, 2009.
Overview of Compressed Sensing. Basic Notions Seminar, The Einstein Institute of Mathematics, The Hebrew University, December, 2009.
Trading regret rate for computational efficiency in online learning with limited feedback. On-line Learning with Limited Feedback, Joint ICML/COLT/UAI Workshop, 2009.
Littlestone's Dimension and Online Learnability. UCSD workshop, 2009.
The Duality of Strong Convexity and Strong Smoothness: Applications to Machine Learning. AFOSR workshop, 2009.
Trading Accuracy for Sparsity. UIC, 2008.
Large-Scale SVM Optimization: Taking a Machine Learning Perspective. NEC Labs, Princeton, 2008.
Tutorial on Online Learning. ICML, 2008.
Online Prediction, Low Regret, and Convex Duality. GIF Workshop, Tübingen, May 15-16, 2008.
Online Prediction: The Role of Convexity and Randomization. Learning Club, The Hebrew University, 2008.
Low $\ell_1$ Norm and Guarantees on Sparsifiability. Sparse Optimization and Variable Selection workshop, ICML/COLT/UAI, July, 2008.