Google’s Gated Multi-Layer Perceptron Outperforms Transformers Using Fewer Parameters


Researchers at Google Brain have announced Gated Multi-Layer Perceptron (gMLP), a deep-learning model that contains only basic multi-layer perceptrons. Using fewer parameters, gMLP outperforms Transformer models on natural-language processing (NLP) tasks and achieves comparable accuracy on computer vision (CV) tasks.
Read more at InfoQ…
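
The gMLP paper ("Pay Attention to MLPs") describes each block as two channel projections wrapped around a spatial gating unit that mixes information across token positions, all built from plain linear layers. The PyTorch sketch below is one possible reading of that design for illustration only; the class names, dimensions, and the near-identity initialization of the spatial projection are assumptions on my part, not the Google Brain reference implementation.

```python
import torch
import torch.nn as nn

class SpatialGatingUnit(nn.Module):
    """Splits the channels in half and gates one half with a learned
    linear projection applied across the token (spatial) dimension."""
    def __init__(self, d_ffn: int, seq_len: int):
        super().__init__()
        self.norm = nn.LayerNorm(d_ffn // 2)
        # Linear layer acting across tokens rather than channels.
        self.spatial_proj = nn.Linear(seq_len, seq_len)
        # Assumed near-identity init: weights ~ 0, bias = 1, so the block
        # starts out close to a plain MLP and learns token mixing gradually.
        nn.init.zeros_(self.spatial_proj.weight)
        nn.init.ones_(self.spatial_proj.bias)

    def forward(self, x):                       # x: (batch, seq_len, d_ffn)
        u, v = x.chunk(2, dim=-1)               # split channels in half
        v = self.norm(v)
        v = self.spatial_proj(v.transpose(1, 2)).transpose(1, 2)
        return u * v                            # element-wise gating

class GMLPBlock(nn.Module):
    """One gMLP block: norm -> channel proj -> GELU -> spatial gating ->
    channel proj, wrapped in a residual connection."""
    def __init__(self, d_model: int, d_ffn: int, seq_len: int):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.proj_in = nn.Linear(d_model, d_ffn)
        self.act = nn.GELU()
        self.sgu = SpatialGatingUnit(d_ffn, seq_len)
        self.proj_out = nn.Linear(d_ffn // 2, d_model)

    def forward(self, x):                       # x: (batch, seq_len, d_model)
        residual = x
        x = self.act(self.proj_in(self.norm(x)))
        x = self.proj_out(self.sgu(x))
        return x + residual

if __name__ == "__main__":
    # Toy stack of blocks over a random token sequence (sizes are arbitrary).
    blocks = nn.Sequential(*[GMLPBlock(d_model=256, d_ffn=1024, seq_len=128)
                             for _ in range(6)])
    tokens = torch.randn(2, 128, 256)           # (batch, seq_len, d_model)
    print(blocks(tokens).shape)                 # torch.Size([2, 128, 256])
```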
