Facebook's giant "XLM-R" neural network is engineered to handle language tasks across 100 different languages, including Swahili and Urdu, but it runs up against computing constraints even when using 500 of Nvidia's world-class GPUs. Tiernan Ray for ZDNet

With a trend to bigger and bigger machine learning models, state-of-the-art artificial intelligence research continues to run up against the limits of conventional computing technology. 

That's one outcome of the latest mammoth piece of work by researchers at Facebook's AI team. Last week they published a report on their invention, XLM-R, a natural language model based on the wildly popular Transformer model from Google. 

The paper, Unsupervised Cross-lingual Representation Learning at Scale, posted on arXiv[1], is authored by Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov, all with Facebook AI Research.

XLM-R is engineered to handle language tasks spanning one hundred different languages. It builds upon work that Conneau did earlier this year with Guillaume Lample at Facebook[2], the creation of the original XLM. It's most similar, they write, to a system shown earlier this year by Google researchers[3] that did cross-lingual training on 103 languages. 
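
To give a concrete sense of what a cross-lingual masked language model does, here is a minimal sketch in Python. It assumes the Hugging Face transformers library and its publicly distributed "xlm-roberta-base" checkpoint of the model; the library call and the example sentences are illustrative assumptions, not something taken from the Facebook paper.

```python
# Minimal sketch: one set of XLM-R weights fills in blanks in many languages.
# Assumes the Hugging Face "transformers" package and its "xlm-roberta-base"
# checkpoint (not part of the Facebook paper itself).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="xlm-roberta-base")

for sentence in [
    "The capital of France is <mask>.",   # English
    "Mji mkuu wa Ufaransa ni <mask>.",    # Swahili: "The capital of France is ___."
]:
    print(sentence)
    for prediction in fill_mask(sentence)[:3]:
        print(f"  {prediction['token_str']!r}  score={prediction['score']:.3f}")
```

The point of the sketch is simply that a single model handles English and Swahili alike, which is the cross-lingual behavior the paper measures on its benchmarks.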

Also: Facebook open sources tower of Babel, Klingon not supported[4]

It's a big improvement over those prior efforts on various benchmark tasks, such as cross-lingual question answering. It makes intriguing progress, in particular, on what are called "low-resource" languages, ones for which relatively little textual training material exists, such as Swahili and Urdu. 

But XLM-R runs into resource constraints despite using five hundred of Nvidia's most powerful GPUs. The authors refer to the "curse of multilinguality": as you stuff more and more languages into a single model of fixed capacity, its performance on each individual language eventually starts to suffer.
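
A crude back-of-envelope way to picture that trade-off is to divide a fixed parameter budget by the number of languages it must serve. The sketch below is purely illustrative: the 250-million-parameter budget is a made-up round number, and the paper's actual analysis measures benchmark accuracy rather than a simple per-language split.

```python
# Toy illustration of the "curse of multilinguality": with a fixed parameter
# budget, the slice of capacity available per language shrinks as languages
# are added. The budget is a made-up round number, not a figure from the paper.
FIXED_PARAMETER_BUDGET = 250_000_000  # hypothetical fixed model size

for num_languages in (1, 15, 50, 100):
    per_language = FIXED_PARAMETER_BUDGET / num_languages
    print(f"{num_languages:>3} languages -> roughly "
          f"{per_language / 1e6:.1f}M parameters of capacity per language")
```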
