The 34th International Conference on Machine Learning (ICML 2017) took place in Sydney, Australia on August 6-11. ICML is one of the most prestigious machine learning conferences, covering a wide range of topics from both practical and theoretical perspectives. This year it brought together researchers and practitioners from machine learning for 434 talks in 9 parallel tracks, 9 tutorial sessions, and 22 workshops. We were there!
Machine learning is about learning from historical data. There are three distinct ways that machine learning systems can learn: supervised learning, unsupervised learning, and reinforcement learning. Recently, machine learning has seen tremendous success thanks to the end-to-end training capabilities of deep neural networks, so-called deep learning, which learn both the prediction representations and the parameters at the same time. Deep learning architectures were originally developed for supervised learning problems such as classification, but they have since been extended to a wide range of problems, from regression in supervised learning to unsupervised domains such as generative methods (e.g., GANs), as well as reinforcement learning (deep RL).
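To make the end-to-end idea concrete, here is a minimal sketch (an illustrative toy example with made-up data, not Criteo code) of training a small feed-forward classifier in PyTorch: the hidden layers learn the representation and the output layer learns the classifier jointly, in a single gradient-based loop.

```python
# Minimal sketch: end-to-end supervised training of a small feed-forward net.
# Toy data is generated at random purely for illustration.
import torch
from torch import nn, optim

# Hypothetical toy classification problem: 1000 examples, 20 features, 3 classes.
X = torch.randn(1000, 20)
y = torch.randint(0, 3, (1000,))

model = nn.Sequential(
    nn.Linear(20, 64),  # representation layers ...
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, 3),   # ... and the classifier, trained together
)

loss_fn = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)   # forward pass through the whole network
    loss.backward()               # gradients flow end to end
    optimizer.step()              # all parameters are updated at once
```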
As expected, deep learning was one of the hottest topics this year, alongside continuous optimization, reinforcement learning, GANs, and online learning. Deep learning continues to be a very active research area: over 20% of all sessions were devoted to it. Here is a selection of the observations, topics, and papers that captured our attention.
Deep learning
The most widely discussed new challenges for deep learning were transfer learning, attention, and memory. There was a heavy emphasis on understanding how and why deep learning works. Several papers and workshops addressed theoretical aspects in order to enhance understanding and interpret results, which is crucial for many real-world applications. For example, there were dedicated workshops on visualization for deep learning and on interpretability in machine learning, along with many studies on the interpretability of predictive models, methodologies for interpreting black-box machine learning models, and even interpretable machine learning algorithms (e.g., architectures designed for interpretability).
The theory still seems far from being able to explain the effectiveness of current deep learning solutions. On this subject, the paper Sharp Minima Can Generalize For Deep Nets explains why the widely held hypothesis, that flat local minima of the objective function found by stochastic gradient descent generalize well, does not necessarily hold for deep nets: sharp minima can also generalize well.
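To see why naive flatness measures are problematic, here is a small numerical sketch (our own toy example, not the paper's code) of the reparameterization argument: for a ReLU network, scaling one layer by alpha and the next by 1/alpha leaves the function unchanged, yet a simple "sharpness" proxy, the loss increase under a fixed-size weight perturbation, can be made arbitrarily large.

```python
# Toy illustration of the reparameterization argument from
# "Sharp Minima Can Generalize For Deep Nets": same function, very different
# sharpness depending on how the weights are parameterized.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))           # toy inputs (made up)
y = X @ rng.normal(size=(10, 1))         # toy regression targets (made up)
W1 = rng.normal(size=(10, 32))           # first layer weights
W2 = rng.normal(size=(32, 1))            # second layer weights

def predict(W1, W2):
    return np.maximum(X @ W1, 0.0) @ W2  # two-layer ReLU network

def loss(W1, W2):
    return float(np.mean((predict(W1, W2) - y) ** 2))

def sharpness(W1, W2, eps=1e-2, trials=20):
    # Average loss increase under fixed-size random perturbations of the weights.
    base = loss(W1, W2)
    bumps = []
    for _ in range(trials):
        d1 = rng.normal(size=W1.shape); d1 *= eps / np.linalg.norm(d1)
        d2 = rng.normal(size=W2.shape); d2 *= eps / np.linalg.norm(d2)
        bumps.append(loss(W1 + d1, W2 + d2) - base)
    return float(np.mean(bumps))

alpha = 100.0  # reparameterize: scale layer 1 up, layer 2 down
print(np.allclose(predict(W1, W2), predict(alpha * W1, W2 / alpha)))  # True: identical function
print(sharpness(W1, W2), sharpness(alpha * W1, W2 / alpha))           # second value is much larger
```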
At Criteo, we use deep learning for product recommendation and user selection.
Read more about our presence at ICML on our research blog.