A few weeks ago, during a security training for developers given by Marcus from Hackmanit (by the way, it's a very good course, covering topics ranging from web development to NoSQL vulnerabilities and defensive coding), we discussed some white-box attacks on web applications (i.e. attacks where the offender has internal access to the object), and I got curious to check whether similar vulnerabilities exist in ML models. After running a simple script using Scikit-Learn, I noticed there are some latent vulnerabilities, not only in terms of objects but also regarding the lack of a proper security mindset when we develop ML models. But first, let's look at a simple example.
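To make the "internal access to the object" point concrete: Scikit-Learn models are usually persisted with `pickle` (or `joblib`, which wraps it), and unpickling untrusted data executes arbitrary callables. The sketch below uses a hypothetical `MaliciousModel` class and a harmless callable (`str.upper`) to show the mechanism; in a real attack the same hook could invoke `os.system` or anything else.

```python
import pickle

class MaliciousModel:
    """Stand-in for a serialized 'model'; hypothetical class for illustration."""
    def __reduce__(self):
        # __reduce__ tells pickle how to rebuild the object at load time.
        # An attacker controls this tuple: (callable, args) is simply called.
        return (str.upper, ("pwned",))

payload = pickle.dumps(MaliciousModel())
result = pickle.loads(payload)  # runs str.upper("pwned"), not a data restore
print(result)  # PWNED
```

This is why model files from untrusted sources should be treated like executable code, not like plain data.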
Introduction. Since the Campeonato Brasileiro officially ended this Sunday with Flamengo as champion and all rounds finished, let's once again ask the same question I raised in my earlier post: "Is the Campeonato Brasileiro becoming more unfair?" Once again I went to Wikipedia and updated the data, now including the 2019 season.
An exploratory analysis using the Gini coefficient. All the data and the full analysis can be found in the brasileirao-gini repository.
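For readers who want the idea in code: the Gini coefficient measures how concentrated a distribution is (0 = all teams with equal points, values near 1 = points concentrated in a few teams). A minimal sketch, using the standard mean-absolute-difference formulation rather than the exact code from the repository:

```python
def gini(points):
    """Gini coefficient of a list of non-negative values.

    Uses G = sum_i (2i - n - 1) * x_i / (n * sum(x)),
    with x sorted ascending and i = 1..n.
    """
    x = sorted(points)
    n = len(x)
    total = sum(x)
    return sum((2 * i - n - 1) * v for i, v in enumerate(x, start=1)) / (n * total)

# A perfectly equal table gives 0; a lopsided one approaches 1
print(round(gini([50, 50, 50, 50]), 3))   # 0.0
print(round(gini([100, 10, 5, 1]), 3))    # 0.651
```

Applied to the final points column of each season's table, this gives one number per year that can be plotted to see whether inequality is trending up.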
Disclaimer: some of the information in this blog post might be incorrect, and since FastText moves very fast in correcting and adjusting things, some parts of this post may become out of date very soon as well.
There is a new tool on the market to help with hyperparameter tuning as a service, called Keras Tuner. They are accepting some people as beta users of this tool.
Most of the time we rely completely on the default parameters of machine learning algorithms, and this can hide the fact that we sometimes make wrong statements about the 'efficiency' of an algorithm.
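A small illustration of the point, assuming Scikit-Learn and a synthetic dataset (the model and grid below are my own choices, not from the original post): the same algorithm can score quite differently with defaults versus a modest hyperparameter search, so comparing algorithms on defaults alone can be misleading.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

# Default parameters: a fully grown tree, prone to overfitting
default_acc = DecisionTreeClassifier(random_state=42).fit(X_tr, y_tr).score(X_te, y_te)

# A small grid over depth and leaf size often changes the picture
grid = GridSearchCV(
    DecisionTreeClassifier(random_state=42),
    {"max_depth": [3, 5, 10, None], "min_samples_leaf": [1, 5, 20]},
    cv=5,
)
tuned_acc = grid.fit(X_tr, y_tr).score(X_te, y_te)

print(f"default: {default_acc:.3f}  tuned: {tuned_acc:.3f}  best: {grid.best_params_}")
```

The exact numbers depend on the data and the seed; the takeaway is that a claim like "algorithm A beats algorithm B" is only meaningful if both were given a fair tuning budget.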