Language Translation at scale

Google has a paper out on language translation at scale using deep neural nets. What is interesting about this paper is that the model architecture allows training on a set of language pairs, e.g. (English, German), (French, Italian), (English, Chinese), (Japanese, Korean), (Chinese, Japanese) and so on.. but at inference time we can also get answers for an unseen pair: for example, give it a Chinese text and a target language of German and get the translation. In the more traditional approaches, we would either 1) train separately for each pair, or 2) train towards an intermediate representation (which could be English or some other common language). The advantage of the new approach is that we get the best out of all the available language pairs. The cool aspects of the paper are that we get zero-shot translation for an unseen pair, and there are hints of an intermediate language being represented within the neural nets.
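The mechanism that makes a single shared model possible is pleasingly simple: an artificial token naming the target language is prepended to the source sentence, and the rest of the model is unchanged. A minimal sketch (the `<2xx>` token spelling and the `prepare_example` helper are illustrative, not the paper's exact code):

```python
# Sketch: one shared translation model, steered by a target-language token.
# Assumption: tokens like "<2de>" are made-up names for illustration.
def prepare_example(source_tokens, target_lang):
    """Prepend a target-language token so a single model can translate
    into any language it has seen as a *target* during training."""
    return ["<2{}>".format(target_lang)] + source_tokens

# A training pair the model has seen (English -> German):
print(prepare_example(["hello", "world"], "de"))
# Zero-shot at inference (Chinese -> German, a pair never trained directly):
print(prepare_example(["你好", "世界"], "de"))
```

Because the target language is just another input token, asking for (Chinese, German) at inference costs nothing extra; whether the model translates it well is what the zero-shot results measure.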

Wonder what the Skype universal translator does..

Will be great to see this for the set of Indian languages too!

Here is the Google paper.



Image Translation..

Back after a few months. I will try to be more regular from now on.. NIPS is coming up in a week or so; should be quite exciting. Here are the schedule and papers.

Here is an interesting paper from a week back.

Image-to-Image Translation with Conditional Adversarial Networks from Berkeley – Isola, Efros and team. Very nice work using conditional GANs, combining a conditional GAN objective with an L1 term to provide a general framework that works for a range of image translation problems (day -> night, sketch -> photo, segmentation map -> image, b&w -> color, etc.). They use a U-Net for the generator to exploit the structural similarities between the input and output pair, and a "PatchGAN" for the discriminator. Code is also available in Lua on GitHub. The effect/importance of the use of noise in this GAN implementation is unclear to me..
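To make the objective concrete, here is a toy NumPy sketch of the generator loss, a conditional GAN term plus a weighted L1 term. Assumptions on my part: the weight `lam=100` is a common choice rather than something stated here, the shapes and random data are purely illustrative, and the GAN term uses a standard sigmoid cross-entropy form:

```python
import numpy as np

# Sketch of the pix2pix-style generator objective: cGAN term + lambda * L1.
# (lam=100 and all shapes/data below are illustrative assumptions.)
def generator_loss(d_fake_logits, fake_img, real_img, lam=100.0):
    # cGAN term: the generator wants D(x, G(x)) to read "real" (label 1).
    # Sigmoid cross-entropy with target 1 is softplus(-logits) = log(1 + e^-z).
    gan_term = np.mean(np.log1p(np.exp(-d_fake_logits)))
    # L1 term: pulls the output toward the ground truth; L1 is preferred
    # over L2 here since it encourages less blurring.
    l1_term = np.mean(np.abs(fake_img - real_img))
    return gan_term + lam * l1_term

rng = np.random.default_rng(0)
d_logits = rng.normal(size=(1, 30, 30))       # PatchGAN: one logit per patch
fake = rng.uniform(size=(1, 3, 256, 256))     # G(x), a generated image
real = rng.uniform(size=(1, 3, 256, 256))     # y, the ground-truth image
print(generator_loss(d_logits, fake, real))
```

Note the discriminator output shape: a PatchGAN emits a grid of real/fake logits, one per image patch, rather than a single scalar, so the GAN term above averages over patches.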