During the 1980s, the field of NLP experienced a major shift toward machine learning. Before then, programs were built on algorithms that relied on complex, hand-coded rules. With the rise of machine learning, NLP moved to more computationally intensive statistical models, made practical by the growing availability of large volumes of text data.
Once again, machine translation was one of the early adopters of statistical frameworks for natural language processing. Other areas of NLP research would also come to discard the older rules-based methods in favor of statistical models.
How NLP is Used Today
Natural language processing has come a long way since the creation of the Turing test. And while machines have yet to pass it, many new use cases for natural language processing have emerged, especially in recent years.
With the rise of deep learning and neural network technology, the machine learning capabilities of NLP have expanded dramatically, and the language tasks it can handle have become increasingly complex.
Google’s LaMDA represents one of the most sophisticated applications of NLP yet. It is a far cry from the limited scope of ELIZA, with many successive developments in between.
Already, NLP technology can be used to generate fiction in the style of particular authors, while more advanced programs such as DALL-E generate images from text prompts. Some NLP programs are even able to generate code and create simple video games!