
NLP is still an open problem, no matter how the tech giants sell it to you…

This post from Ana Marasović explains why. Here is a small sample:

We should use more inductive biases, but we have to work out what are the most suitable ways to integrate them into neural architectures such that they really lead to expected improvements.

We have to enhance pattern-matching state-of-the-art models with some notion of human-like common sense that will enable them to capture the higher-order relationships among facts, entities, events or activities. But mining common sense is challenging, so we are in need of new, creative ways of extracting common sense.

Finally, we should deal with unseen distributions and unseen tasks, otherwise “any expressive model with enough data will do the job.” Obviously, training such models is harder and results will not immediately be impressive. As researchers we have to be bold with developing such models, and as reviewers we should not penalize work that tries to do so.
This discussion within the field of NLP reflects a larger trend within AI in general: reflection on the flaws and strengths of deep learning. Yuille and Liu wrote an opinion titled Deep Nets: What have they ever done for Vision? in the context of vision, and Gary Marcus has long championed using approaches beyond deep learning for AI in general. It is a healthy sign that AI researchers are clear-eyed about the limitations of deep learning, and are working to address them.

In practical terms, today NLP gives us at most a good way to compute word frequencies and perform some structural analysis of language. For the rest, we are still far from human capacity.
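To make the point concrete, the kind of task current tooling handles reliably is frequency analysis like the sketch below. It uses only the Python standard library; the regex tokenizer is a deliberate simplification, not a serious linguistic tokenizer.

```python
import re
from collections import Counter

def word_frequencies(text):
    """Count word occurrences in a text, lowercased.

    A minimal sketch of the frequency analysis NLP handles well
    today. The regex tokenizer is an assumption for illustration;
    real pipelines use proper tokenizers.
    """
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(tokens)

freqs = word_frequencies("The cat sat on the mat. The mat was flat.")
print(freqs.most_common(2))  # → [('the', 3), ('mat', 2)]
```

Counting words is trivial; capturing why the cat sat on the mat, and what that implies, is exactly the common-sense gap the quoted passage describes.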
