
NLP is constantly evolving. This is evident from the fact that just recently, Google AI released the Reformer, a new natural language processing model. The Reformer is based on the Transformer and can be considered a big improvement over it. A little background: Google AI released the Transformer back in 2017, and it was already revolutionary, a big step towards large language models for NLP. Google's BERT (Bidirectional Encoder Representations from Transformers), a neural network technique for NLP tasks, is based on the Transformer. Still, BERT can only read a few hundred words as context, which for many tasks is simply too little.

Not so for the Reformer.

The Reformer is highly computationally efficient, dealing easily with 10,000 words of context, whereas BERT is compute-intensive even when it is merely being fine-tuned. The basic idea behind the Reformer's performance is that it narrows attention down using locality-sensitive hashing (LSH): instead of every token attending to every other token, tokens are hashed into buckets so that similar ones land together, and attention is computed only within each bucket. This cuts the cost of attention from quadratic in the sequence length to roughly O(L log L). Since its introduction, the Reformer has been hailed as "the new basis for deep language models", and for good reason.
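To make the bucketing idea concrete, here is a minimal Python sketch of angular LSH attention using only NumPy. The function names (`lsh_buckets`, `lsh_attention`) and parameters are illustrative, not taken from the Reformer codebase; the real model also uses multiple hash rounds, chunks buckets for parallelism, and adds masking details omitted here.

```python
import numpy as np

def lsh_buckets(x, n_buckets, seed=0):
    """Angular LSH: project onto random hyperplanes so that vectors
    pointing in similar directions land in the same bucket."""
    rng = np.random.default_rng(seed)
    rotations = rng.normal(size=(x.shape[-1], n_buckets // 2))
    rotated = x @ rotations
    # [h, -h] trick: argmax over the signed projections assigns
    # each vector to one of n_buckets angular regions.
    return np.argmax(np.concatenate([rotated, -rotated], axis=-1), axis=-1)

def lsh_attention(x, n_buckets=8):
    """Toy shared-QK attention: each position attends only to the
    positions that hashed into its own bucket, not to all L positions."""
    buckets = lsh_buckets(x, n_buckets)
    out = np.zeros_like(x)
    for b in np.unique(buckets):
        idx = np.where(buckets == b)[0]            # bucket-mates only
        scores = x[idx] @ x[idx].T / np.sqrt(x.shape[-1])
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        out[idx] = weights @ x[idx]                # values = x for brevity
    return out

# Because similar tokens cluster into buckets, the attention cost scales
# with bucket size rather than with the full sequence length.
out = lsh_attention(np.random.default_rng(1).normal(size=(64, 16)))
print(out.shape)  # (64, 16)
```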

Why this matters

With the ability to read such a wide context frame, the Reformer can go beyond standard natural language processing: it can summarize large portions of text at once, and it could even be extended to work outside of text, e.g. reading musical notation to generate music, or processing video. This would be completely revolutionary, as NLP models have until now been applied to text only.
Additionally, the Reformer can train powerful language models on just one GPU, using a mere 16GB of memory, which puts it within reach of far more practitioners.
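As a sketch of how accessible this makes experimentation, the Hugging Face transformers library ships a Reformer port; assuming that library (and PyTorch) is installed, something like the following runs on a single modest GPU or even a CPU. The checkpoint is the publicly released Crime-and-Punishment demo model.

```python
# pip install transformers torch  (assumed environment)
import torch
from transformers import ReformerModelWithLMHead, ReformerTokenizer

# Publicly released demo checkpoint trained on "Crime and Punishment";
# small enough to load and run without specialized hardware.
tokenizer = ReformerTokenizer.from_pretrained("google/reformer-crime-and-punishment")
model = ReformerModelWithLMHead.from_pretrained("google/reformer-crime-and-punishment")
model.eval()

prompt = "A few months later"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Sampled generation; LSH attention keeps memory use modest even
# as the context grows far beyond what BERT could handle.
with torch.no_grad():
    output = model.generate(input_ids, max_length=100, do_sample=True, temperature=0.7)

print(tokenizer.decode(output[0]))
```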
Such an efficient model, coupled with this easier availability, will be a big game-changer and push the evolution of NLP even further.