ICLR 2019 | MILA, Microsoft, and MIT Share Best Paper Honours



The Seventh International Conference on Learning Representations (ICLR) kicked off today. One of the world’s major machine learning conferences, ICLR this year received 1591 main conference paper submissions, up 60 percent over last year, and accepted 24 papers for oral presentation and 476 for poster presentation.

The Best Paper winners are Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks from the Montreal Institute for Learning Algorithms (MILA) and Microsoft Research; and The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks from the Massachusetts Institute of Technology (MIT).

Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks proposes a new inductive bias for recurrent neural networks (RNNs) called ordered neurons. Researchers Yikang Shen, Shawn Tan, Alessandro Sordoni and Aaron Courville developed a novel recurrent unit, the ON-LSTM, and a new activation function, cumax, based on this idea. The research aims to integrate tree structures into RNNs by allocating separate groups of hidden state neurons to long-term and short-term information.

Abstract: Natural language is hierarchically structured: smaller units (e.g., phrases) are nested within larger units (e.g., clauses). When a larger constituent ends, all of the smaller constituents that are nested within it must also be closed. While the standard LSTM architecture allows different neurons to track information at different time scales, it does not have an explicit bias towards modeling a hierarchy of constituents. This paper proposes to add such an inductive bias by ordering the neurons; a vector of master input and forget gates ensures that when a given neuron is updated, all the neurons that follow it in the ordering are also updated. Our novel recurrent architecture, ordered neurons LSTM (ON-LSTM), achieves good performance on four different tasks: language modeling, unsupervised parsing, targeted syntactic evaluation, and logical inference.
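The ordering mechanism rests on the cumax activation, which the paper defines as the cumulative sum of a softmax: its output is a monotonically increasing vector in [0, 1] that softly splits the hidden state into a segment of updated neurons and a segment of preserved neurons. A minimal NumPy sketch (the master input and forget gates built on top of cumax are omitted):

```python
import numpy as np

def cumax(x):
    # cumax(x) = cumsum(softmax(x)): a monotonically non-decreasing
    # vector in [0, 1] whose soft 0-to-1 transition point acts as a
    # boundary between "low-ranked" and "high-ranked" neurons.
    e = np.exp(x - np.max(x))          # numerically stable softmax
    softmax = e / e.sum()
    return np.cumsum(softmax)

g = cumax(np.array([1.0, 2.0, 0.5, 3.0]))
```

Because the entries sum to 1, the final component of `g` is exactly 1, and earlier components give the soft probability that the boundary has already been crossed.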

In The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks, authors Jonathan Frankle and Michael Carbin explore the lottery ticket hypothesis, which proposes that dense, randomly-initialized neural networks contain subnetworks (referred to as “winning tickets”) that can train to accuracy comparable to the original network’s at a similar speed. The researchers introduce an algorithm that identifies such winning tickets and demonstrates their existence, with the goal of improving training performance and network design while deepening the theoretical understanding of neural networks.

Abstract: Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy. However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start, which would similarly improve training performance. We find that a standard pruning technique naturally uncovers subnetworks whose initializations made them capable of training effectively. Based on these results, we articulate the “lottery ticket hypothesis:” dense, randomly-initialized, feed-forward networks contain subnetworks (“winning tickets”) that — when trained in isolation — reach test accuracy comparable to the original network in a similar number of iterations. The winning tickets we find have won the initialization lottery: their connections have initial weights that make training particularly effective. We present an algorithm to identify winning tickets and a series of experiments that support the lottery ticket hypothesis and the importance of these fortuitous initializations. We consistently find winning tickets that are less than 10–20% of the size of several fully-connected and convolutional feed-forward architectures for MNIST and CIFAR10. Above this size, the winning tickets that we find learn faster than the original network and reach higher test accuracy.
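The identification procedure the paper describes is iterative magnitude pruning: train the network, prune a fraction of the smallest-magnitude surviving weights, reset the survivors to their original initialization, and repeat. A schematic NumPy sketch of that loop, where `train_fn` is a hypothetical stand-in for a full training run:

```python
import numpy as np

def find_winning_ticket(init_weights, train_fn, prune_frac=0.2, rounds=5):
    """Iterative magnitude pruning sketch (one flat weight vector).

    train_fn(weights, mask) is assumed to train the masked network
    and return the trained weights; here it abstracts a real run.
    """
    mask = np.ones_like(init_weights)
    weights = init_weights.copy()
    for _ in range(rounds):
        trained = train_fn(weights, mask)
        # Prune the prune_frac smallest-magnitude weights still alive.
        alive = np.abs(trained[mask == 1])
        threshold = np.quantile(alive, prune_frac)
        mask = np.where((np.abs(trained) < threshold) & (mask == 1), 0.0, mask)
        # Rewind the surviving weights to their original initialization:
        # this reset is what distinguishes a winning ticket from an
        # arbitrary sparse subnetwork.
        weights = init_weights * mask
    return mask, weights

# Toy usage with an identity "training" step, purely to exercise the loop.
init = np.arange(1.0, 11.0)
mask, w = find_winning_ticket(init, lambda wt, m: wt, prune_frac=0.2, rounds=2)
```

In a real experiment the returned mask and rewound weights would then be trained in isolation and compared against the dense network's accuracy and convergence speed.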

ICLR 2019 runs through Thursday, May 9 at the Ernest N. Morial Convention Center in New Orleans. According to Facebook Chief AI Scientist Yann LeCun, who co-created ICLR in 2012, over 3000 people are expected to attend the four-day conference.

Journalist: Tony Peng | Editor: Michael Sarazen

2018 Fortune Global 500 Public Company AI Adaptivity Report is out!
Purchase a Kindle-formatted report on Amazon.
Apply for Insight Partner Program to get a complimentary full PDF report.

Follow us on Twitter @Synced_Global for daily AI news!

We know you don’t want to miss any stories. Subscribe to our popular Synced Global AI Weekly to get weekly AI updates.

