AI Researchers Launch SuperGLUE, a Rigorous Benchmark For Language Understanding

Facebook AI Research, together with Google’s DeepMind, the University of Washington, and New York University, today introduced SuperGLUE, a series of benchmark tasks for measuring the performance of modern, high-performance language-understanding AI. From a report: SuperGLUE was built on the premise that deep learning models for conversational AI have “hit a ceiling” and need greater challenges. It uses Google’s BERT as a model performance baseline. Considered state of the art in many regards in 2018, BERT has since been surpassed by a number of models this year, such as Microsoft’s MT-DNN, Google’s XLNet, and Facebook’s RoBERTa, all of which are based in part on BERT and achieve performance above the human baseline average. SuperGLUE follows the General Language Understanding Evaluation (GLUE) benchmark, introduced in April 2018 by researchers from NYU, the University of Washington, and DeepMind. SuperGLUE is designed to be more difficult than the GLUE tasks and to encourage the building of models capable of grasping more complex or nuanced language.
