Security in Machine Learning Engineering: A white-box attack and simple countermeasures

Some weeks ago, during a security training for developers given by Marcus from Hackmanit (by the way, it is a very good course, covering topics from web development to NoSQL vulnerabilities and defensive coding), we discussed some white-box attacks on web applications (i.e., attacks where the offender has internal access to the object), and I got curious to check whether similar vulnerabilities exist in ML models. After running a simple script based on [1], [2], [3] using Scikit-Learn, I noticed that there are some latent vulnerabilities, not only in the model objects themselves but also in the lack of a proper security mindset when we are developing ML models. The minimal sketch below shows what this white-box access to a model object looks like; after that, let's check a simple example.
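
A minimal sketch, assuming a pickled Scikit-Learn estimator as the deployment artifact (the toy data, model, and attribute names here are illustrative, not the exact script from [1], [2], [3]): anyone holding the serialized object can read, and even silently alter, its learned internals.

    # White-box access to a serialized Scikit-Learn model:
    # whoever holds the pickled artifact can inspect and tamper with
    # every internal attribute of the estimator, no queries needed.
    import pickle

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    # Toy stand-in for a production model artifact.
    X, y = make_classification(n_samples=500, n_features=5, random_state=42)
    model = LogisticRegression().fit(X, y)

    # Serialize it the way many pipelines ship models around.
    blob = pickle.dumps(model)

    # "Attacker" side: full internal access from the artifact alone.
    stolen = pickle.loads(blob)
    print("Learned coefficients:", stolen.coef_)
    print("Intercept:", stolen.intercept_)

    # The object is also mutable: internals can be flipped before the
    # artifact is passed downstream, silently corrupting predictions.
    stolen.coef_ = -stolen.coef_
    print("Tampered prediction:", stolen.predict(X[:1]))

Keep in mind that pickle itself deserializes by executing whatever the byte stream tells it to, so even just loading an untrusted artifact is a risk on its own.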

Machine Learning Tetrad = Business Knowledge + Statistical Understanding + ML Algos + Data

In the post Learning Market Dynamics for Optimal Pricing, Sharan Srinivasan talks about how Airbnb combines ML and structural modeling (mathematical + statistical modeling) to offer guests optimal pricing based on market dynamics, in particular on how far in advance the booking is made, i.e., the time between the booking date and the check-in date (also known as Lead Time).
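
Just to pin down the term, a trivial sketch with made-up dates (not Airbnb's code):

    # Lead Time = time between the booking date and the check-in date.
    from datetime import date

    booking_date = date(2020, 3, 1)   # hypothetical booking date
    checkin_date = date(2020, 3, 15)  # hypothetical check-in date

    lead_time_days = (checkin_date - booking_date).days
    print(f"Lead time: {lead_time_days} days")  # -> Lead time: 14 days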