Deep Learning Predictions for 2017

By James Kobielus of IBM.

(…)The first hugely successful consumer application of deep learning will come to market: I predict that deep learning's first avid embrace by the general public will come in 2017, and that its focus will be processing the glut of photos that people are capturing with their smartphones and sharing on social media. In this regard, the golden deep-learning opportunities will be in apps that facilitate image search, auto-tagging, auto-correction, embellishment, photorealistic rendering, resolution enhancement, style transformation, and fanciful figure inception. (…)
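
As a concrete sketch of the auto-tagging use case, the snippet below uses Keras (one of the open-source libraries discussed below) to label a photo with a convolutional network pre-trained on ImageNet. This is a minimal illustration, not a production app, and the file name holiday_photo.jpg is a hypothetical stand-in.

    from keras.applications.resnet50 import ResNet50, preprocess_input, decode_predictions
    from keras.preprocessing import image
    import numpy as np

    # A convolutional network pre-trained on ImageNet; weights download on first use
    model = ResNet50(weights='imagenet')

    def auto_tag(photo_path, top=5):
        # Resize the photo to the 224x224 input the network expects
        img = image.load_img(photo_path, target_size=(224, 224))
        x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
        # Map the 1000-way softmax output to human-readable candidate tags
        preds = model.predict(x)
        return [(label, float(score)) for _, label, score in decode_predictions(preds, top=top)[0]]

    print(auto_tag('holiday_photo.jpg'))  # hypothetical photo path

The same pre-trained-network pattern underlies image search and auto-correction features; only the final layers of the model and the training data change.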

(…)A dominant open-source deep-learning tool and library will take the developer community by storm: As 2016 draws to a close, we're seeing more solution providers open-source their deep-learning tools, libraries, and other intellectual property. This past year, Google open-sourced TensorFlow and DeepMind's DeepMind Lab code, Apple published its deep-learning research, and the OpenAI non-profit group has started to build its deep-learning benchmarking technology. Already, developers have a choice of open-source tools for development of deep-learning applications in Spark, Scala, Python, and Java, with support for other languages sure to follow. In addition to DeepMind Lab and TensorFlow, open tools for deep-learning development currently include DeepLearning4J, Keras, Caffe, Theano, Torch, OpenBLAS and MXNet. (…)
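
To give a sense of why these libraries are winning over developers, here is a minimal sketch in Keras: a small fully connected classifier defined and trained end-to-end in roughly a dozen lines. The random stand-in data is an assumption for illustration only.

    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense

    # Random stand-in data: 1000 samples, 20 features, binary labels
    X = np.random.rand(1000, 20)
    y = (X.sum(axis=1) > 10).astype(int)

    # A small fully connected network with two hidden layers
    model = Sequential([
        Dense(64, activation='relu', input_dim=20),
        Dense(64, activation='relu'),
        Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    model.fit(X, y, epochs=5, batch_size=32)

Keras runs on top of TensorFlow or Theano, so the same model definition can be executed by either backend.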

(…)A new generation of low-cost commercial off-the-shelf deep-learning chipsets will come to market: Deep learning relies on the application of multilevel neural-network algorithms to high-dimensional data objects. As such, it requires the execution of fast matrix manipulations on highly parallel architectures in order to identify complex, elusive patterns such as objects, faces, voices, and threats. For high-dimensional deep learning to become more practical and pervasive, the underlying pattern-crunching hardware needs to become faster, cheaper, more scalable, and more versatile. The hardware also needs to become capable of processing data sets that will continue to grow in dimensionality as new sources are added, merged with other data, and analyzed by deep-learning algorithms of greater sophistication. (…)
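
The dependence on parallel matrix hardware can be made concrete with TensorFlow's device placement, sketched below. The device string '/gpu:0' is an assumption that requires a CUDA-capable GPU; soft placement falls back to the CPU when no accelerator is present.

    import tensorflow as tf

    # A large matrix multiplication, the core workload of deep learning
    with tf.device('/gpu:0'):  # assumption: a CUDA-capable GPU is installed
        a = tf.random_normal([4096, 4096])
        b = tf.random_normal([4096, 4096])
        c = tf.matmul(a, b)

    # allow_soft_placement lets TensorFlow fall back to the CPU if needed;
    # log_device_placement prints which device actually ran each operation
    config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
    with tf.Session(config=config) as sess:
        print(sess.run(c).shape)  # (4096, 4096)

Specialized chipsets aim to make exactly this kind of operation faster and cheaper than it is on general-purpose CPUs.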

(…)The algorithmic repertoire of deep learning will grow more diverse and sophisticated: Deep learning remains a fairly arcane, specialized, and daunting technology to most data professionals. The growing adoption of deep learning in 2017 will compel data scientists and other developers to deepen their expertise in such cutting-edge techniques as recurrent neural networks, deep convolutional networks, deep belief networks, restricted Boltzmann machines, and stacked auto-encoders. (…)
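
As a starting point for one of these techniques, here is a minimal sketch of a stacked auto-encoder in Keras: two encoding layers compress the input into a low-dimensional code, and two mirrored decoding layers learn to reconstruct it. The random 784-feature data (e.g., flattened 28x28 images) is a stand-in assumption.

    import numpy as np
    from keras.models import Model
    from keras.layers import Input, Dense

    # Stand-in data: 1000 samples with 784 features (e.g., flattened 28x28 images)
    X = np.random.rand(1000, 784)

    # Encoder: two stacked layers compress 784 features down to a 32-unit code
    inputs = Input(shape=(784,))
    encoded = Dense(128, activation='relu')(inputs)
    encoded = Dense(32, activation='relu')(encoded)

    # Decoder: two mirrored layers reconstruct the original 784 features
    decoded = Dense(128, activation='relu')(encoded)
    decoded = Dense(784, activation='sigmoid')(decoded)

    autoencoder = Model(inputs, decoded)
    autoencoder.compile(optimizer='adam', loss='mse')

    # The network is trained to reproduce its own input through the bottleneck
    autoencoder.fit(X, X, epochs=5, batch_size=32)

Restricted Boltzmann machines and deep belief networks pursue the same goal of learning compact representations, but with probabilistic rather than purely feed-forward training.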