
Limited pre-sale tickets are live for the 2020 edition of the world’s biggest Deep Learning Summit in San Francisco. Save over 60% on your pass and join RE•WORK next Jan 30–31.

Frequent speakers include AI pioneers Yoshua Bengio, Yann LeCun and Geoffrey Hinton, as well as global experts Ian Goodfellow, Chelsea Finn, Doina Precup, Hugo Larochelle and many more. A limited number of pre-sale tickets are available until Friday 28, and they will sell out. Register now.

Take a look at some of the presentations from last year’s edition of the summit to find out what to expect:

Jeff Clune, Senior Research Scientist & Founding Member at Uber AI Labs

Go-Explore: A New Type of Algorithm for Hard-exploration Problems

A grand challenge in reinforcement learning is producing intelligent exploration, especially when rewards are sparse or deceptive. Jeff presented Go-Explore, a new algorithm for such ‘hard-exploration’ problems. Go-Explore dramatically improves the state of the art on benchmark hard-exploration problems, solving some that were previously considered unsolvable. Jeff explained the algorithm, the new research directions it opens up, and why his team believes it will enable progress in a variety of domains, especially the many that can harness a simulator during training (e.g. robotics).
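
The core Go-Explore loop is: remember promising states ("cells") in an archive, return to one deterministically via the simulator, then explore from there. A minimal sketch on a toy one-dimensional chain environment (the environment, cell definition and all names here are illustrative assumptions, not Jeff's implementation):

```python
import random

def step(state, action):
    # Hypothetical toy simulator: a deterministic chain over states 0..20.
    return max(0, min(20, state + action))

def go_explore(goal=20, iterations=5000, seed=0):
    rng = random.Random(seed)
    # Archive maps a cell (here, the raw state) to the shortest known
    # action sequence that reaches it. We start knowing only the origin.
    archive = {0: []}
    for _ in range(iterations):
        cell = rng.choice(list(archive))     # select a cell to return to
        trajectory = list(archive[cell])
        state = 0
        for a in trajectory:                 # "go": replay deterministically
            state = step(state, a)
        for _ in range(5):                   # "explore" from that cell
            a = rng.choice([-1, 1])
            state = step(state, a)
            trajectory.append(a)
            # Keep the shortest trajectory found for each cell.
            if state not in archive or len(trajectory) < len(archive[state]):
                archive[state] = list(trajectory)
        if goal in archive:
            return archive[goal]
    return None

solution = go_explore()  # action sequence reaching the goal state
```

Because the archive never forgets a visited cell, exploration keeps pushing outward from the frontier instead of repeatedly re-exploring near the start, which is what makes the approach effective on sparse-reward problems.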

Karol Hausman, Research Scientist & PhD Student at Google Brain & University of Southern California

Latent Structure in Deep Robotic Learning

Traditionally, deep reinforcement learning has focused on learning one particular skill in isolation and from scratch. This often leads to repeated efforts to learn the right representation for each skill individually, even though such representations could likely be shared between different skills. In contrast, there is evidence that humans efficiently reuse previously learned skills to learn new ones, e.g. by sequencing or interpolating between them.

In this talk, Karol demonstrated how one can discover latent structure when learning multiple skills concurrently. In particular, he presented a first step towards learning robot skill embeddings that enable reusing previously acquired skills. He showed how these ideas can be applied to multi-task reinforcement learning, sim-to-real transfer and imitation learning.
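
The key idea of a skill embedding is that each task owns a small learned vector while the policy weights are shared, so skills can be reused or blended by moving in embedding space. A minimal sketch with two opposing toy skills and a scalar embedding (the tasks, parameterization and training setup are illustrative assumptions, not Karol's actual method):

```python
import random

def train(epochs=5000, lr=0.05, seed=0):
    # Hypothetical setup: the policy outputs action = v * z[task] * state,
    # where v is a weight shared across tasks and z[task] is a per-task
    # skill embedding. Task "fwd" wants action = +state, "rev" wants -state.
    rng = random.Random(seed)
    v = 0.1
    z = {"fwd": 0.1, "rev": -0.1}        # per-task skill embeddings
    target = {"fwd": 1.0, "rev": -1.0}   # desired gain per task
    for _ in range(epochs):
        t = rng.choice(["fwd", "rev"])
        s = rng.uniform(-1.0, 1.0)       # sampled state
        a = v * z[t] * s                 # policy action
        err = a - target[t] * s
        # SGD on 0.5 * err^2: both the shared weight and the task's
        # embedding receive gradient, so structure is shared across tasks.
        dv = err * z[t] * s
        dz = err * v * s
        v -= lr * dv
        z[t] -= lr * dz
    return v, z

v, z = train()
# Interpolating between embeddings yields an intermediate behavior,
# without retraining the shared policy weights:
z_mid = 0.5 * (z["fwd"] + z["rev"])
```

Even in this tiny example the shared weight `v` is trained by both tasks, which is the sense in which representation is reused rather than relearned per skill.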

Yixuan Li, Research Scientist at Facebook AI (Computer Vision Group)

Advancing State-of-the-art Image Recognition with Deep Learning on Hashtags

At Facebook, every day hundreds of millions of users interact with billions of pieces of visual content. By understanding what’s in an image, their systems can help connect users with the things that matter most to them. To improve their recognition systems, Yixuan spoke about two main research challenges: how to train models at the scale of billions of images, and how to improve the reliability of model predictions. Since current models are typically trained on data individually labeled by human annotators, scaling up to billions is non-trivial. Yixuan’s team addressed this challenge by training image recognition networks on large sets of public images with user-supplied hashtags as labels. By leveraging this weakly supervised pretraining, their best model achieved a record-high 85.4% accuracy on the ImageNet dataset.
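
The weak-supervision step amounts to treating each image's hashtags as noisy multi-label targets. A minimal sketch of that labeling step with a per-tag logistic loss (the vocabulary, helper names and loss choice here are illustrative assumptions, not Facebook's actual pipeline):

```python
import math

# Hypothetical tag vocabulary; in practice this would be a large curated
# set of canonicalized hashtags.
VOCAB = ["dog", "cat", "beach", "sunset", "food"]
TAG_INDEX = {t: i for i, t in enumerate(VOCAB)}

def hashtags_to_target(hashtags):
    """Build a multi-hot target over VOCAB from raw user hashtags.
    Unknown tags are simply dropped: the weak signal only needs to be
    right on average across very many images, not per image."""
    target = [0.0] * len(VOCAB)
    for tag in hashtags:
        idx = TAG_INDEX.get(tag.lstrip("#").lower())
        if idx is not None:
            target[idx] = 1.0
    return target

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def multilabel_loss(logits, target):
    """Mean per-tag binary cross-entropy against the multi-hot target."""
    loss = 0.0
    for x, y in zip(logits, target):
        p = sigmoid(x)
        loss -= y * math.log(p) + (1 - y) * math.log(1 - p)
    return loss / len(logits)

target = hashtags_to_target(["#Dog", "#beach", "#selfie"])
```

Here `#selfie` falls outside the vocabulary and is ignored; scaling this scheme up is what lets hashtag supervision replace billions of human annotations.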

Keen to learn from more global experts? Register now and save over $1000 on your pass.

Source: Artificial Intelligence on Medium