Anyone following the technology debate through academia, industry, conferences, and the media has already noticed that Artificial Intelligence (AI) and its subfields are the hottest topics of the moment.
In a very insightful article, David Talby discusses the fact that the moment a Machine Learning model goes to production it starts to degrade, because the model comes into contact with reality. The author puts it this way: "The key is that, in contrast to a calculator, your ML system does interact with the real world."
Most of the time we rely entirely on the default parameters of a Machine Learning algorithm, and this can lead us to make wrong statements about the 'efficiency' of that algorithm.
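A minimal sketch of this point, assuming scikit-learn is installed and using a synthetic dataset (the dataset, grid values, and SVC choice are illustrative, not from the original experiment): the same algorithm can report noticeably different scores with its defaults versus a lightly tuned parameter grid.

```python
# Compare an SVM left at its default hyperparameters against a small
# grid search over C and gamma, on the same synthetic classification task.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Default parameters: whatever the library ships with.
default_model = SVC().fit(X_train, y_train)

# Lightly tuned parameters: 3-fold cross-validated grid search.
search = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": ["scale", 0.01]}, cv=3)
search.fit(X_train, y_train)

print(f"default params: {default_model.score(X_test, y_test):.3f}")
print(f"tuned params:   {search.score(X_test, y_test):.3f}")
```

The gap between the two numbers depends on the data, but the point stands: a benchmark run only with defaults says as much about the library's chosen defaults as about the algorithm itself.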
In one experiment using a very large text database, at the end of training with train_supervised() in FastText I got a serialized model of more than 1 GB.
Shirin Glander wrote a great post about how to use Plumber to serve R models as an API.