NIPS 2016

NIPS was a blast: some great invited keynotes, talks, workshops, and tutorials, plus some nice demos. I particularly enjoyed the talks on Generative Adversarial Networks, Program Synthesis / Differentiable Neural Computers, and Bayesian deep learning. The keynotes were all awesome. I particularly enjoyed talks by Alex Graves, Ian Goodfellow, Josh Tenenbaum, Zoubin Ghahramani, Oriol Vinyals, Yann LeCun, Jürgen Schmidhuber, and Nando de Freitas. Look out for these on YouTube. I am sure I missed some names here..

Surprisingly, there was not much on autonomous vehicles, although there was one workshop and a few booths and demos.

Most of the material will be available on video, and the slides are already available. Here are some blogs with coverage of NIPS 2016. I have left out the big corporate blogs that cover their own contributions.

Generative Adversarial Networks are the hotness at NIPS 2016

Language Translation at scale

Google has a paper out on language translation at scale using deep neural nets. What is interesting about this paper is that the model architecture allows training with a set of language pairs, e.g. (English, German), (French, Italian), (English, Chinese), (Japanese, Korean), (Chinese, Japanese), and so on, while at inference time we can also get answers for an unseen pair: for example, give it a Chinese text and a target language of German and get the translation. In a more traditional approach, we could 1) train separately for each pair, or 2) train towards an intermediate representation (which could be English or a common language itself). The advantage of this approach is that we can get the best out of all the available language pairs. The cool aspects of the paper are that we get zero-shot translation for an unseen pair, and there seem to be hints of an intermediate language being represented within the neural net.
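The trick that makes one model serve many pairs is an artificial token prepended to the source sentence to request the target language. Here is a toy sketch of just that data-preparation step (hedged: the `<2xx>` token style follows the paper's description, but the helper function and examples are my own illustration, not Google's code):

```python
# Toy illustration of the multilingual NMT input format: the model is
# trained on many language pairs, and the desired target language is
# requested via a token prepended to the source text.

def make_example(source_text: str, target_lang: str) -> str:
    """Prepend an artificial target-language token (illustrative spelling)."""
    return f"<2{target_lang}> {source_text}"

# Training pairs might cover (English, German), (English, Chinese), etc.:
train_input = make_example("Hello world", "de")   # English -> German pair

# At inference time, the same mechanism can request an *unseen* pair,
# e.g. Chinese source with a German target (zero-shot translation):
zero_shot_input = make_example("你好，世界", "de")
```

The single shared encoder/decoder then has to learn a representation that works across languages, which is where the hints of an "interlingua" come from.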

Wonder what the Skype universal translator does..

Will be great to see this for the set of Indian languages too!

Here is the Google paper.


Image Translation..

Back after a few months. I will try to be more regular from now on.. NIPS is coming up in a week or so; should be quite exciting. Here are the schedule and papers.

Here is an interesting paper from a week back.

Image-to-Image Translation with Conditional Adversarial Networks from Berkeley – Isola, Efros, and team. Very nice work using conditional GANs: a conditional GAN objective plus an L1 term provides a general framework that works for a whole set of image translation problems (day -> night, sketch -> photo, segmentation map -> image, b&w -> color, etc.). They use a U-Net for the generator to exploit the structural similarities between the input and output pair, and a “PatchGAN” for the discriminator. Code is also available in Lua on GitHub. The effect/importance of the use of noise in this GAN implementation is unclear to me..
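The combined objective can be sketched numerically as a GAN term plus a weighted L1 reconstruction term (hedged: the shapes and the non-saturating generator loss here are illustrative simplifications; the λ=100 weight is the value the paper reports, and the discriminator scores stand in for a PatchGAN's grid of per-patch real/fake outputs):

```python
import numpy as np

def l1_loss(fake: np.ndarray, target: np.ndarray) -> float:
    """L1 reconstruction term: pushes G(x) toward the ground-truth image."""
    return float(np.mean(np.abs(fake - target)))

def cgan_generator_loss(d_scores_on_fake: np.ndarray, eps: float = 1e-8) -> float:
    """Non-saturating GAN term: G wants D's (patch) scores near 1."""
    return float(-np.mean(np.log(d_scores_on_fake + eps)))

def pix2pix_g_loss(d_scores_on_fake, fake, target, lambda_l1: float = 100.0) -> float:
    """Generator objective: GAN term + lambda * L1, as in the paper's setup."""
    return cgan_generator_loss(d_scores_on_fake) + lambda_l1 * l1_loss(fake, target)
```

The L1 term keeps the output globally faithful to the target, while the patch-level adversarial term sharpens local texture, which is why the combination works across such different translation tasks.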

Deep Learning Hardware – TPUs from Google

At Google I/O this May, Google announced a custom ASIC for machine learning, and said they have been using it in the data center for about a year, in well-known applications including Search, Maps, and the celebrated win over Lee Sedol at Go. It turns out that it likely does 8-bit fixed-point arithmetic, which gives it much better performance per watt.
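The performance-per-watt win comes from representing weights and activations in 8 bits instead of 32-bit floats. A minimal sketch of the idea, assuming a simple symmetric linear quantization scheme (the TPU's actual quantization details are not public in this announcement):

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Map a float array to int8 values plus a single scale factor."""
    max_abs = float(np.max(np.abs(x)))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float values from the int8 representation."""
    return q.astype(np.float32) * scale
```

Arithmetic on the int8 values is far cheaper in silicon than float math, at the cost of a small, bounded quantization error.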

Looking forward to seeing what Nervana comes out with, and what Intel or Qualcomm have up their sleeves!