
Model explainability will be a security issue

Ben Lorica talks about security mostly in software engineering terms, but for me the most important aspect of security in Machine Learning going forward is model explainability, about which he says:

Model explainability has become an important area of research in machine learning. Understanding why a model makes specific decisions is important for several reasons, not the least of which is that it makes people more comfortable with using machine learning. That “comfort” can be deceptive, of course. But being able to ask models why they made particular decisions will conceivably make it easier to see when they’ve been compromised. During development, explainability will make it possible to test how easy it is for an adversary to manipulate a model, in applications from image classification to credit scoring. In addition to knowing what a model does, explainability will tell us why, and help us build models that are more robust, less subject to manipulation; understanding why a model makes decisions should help us understand its limitations and weaknesses. At the same time, it’s conceivable that explainability will make it easier to discover weaknesses and attack vectors. If you want to poison the data flowing into a model, it can only help to know how the model responds to data.
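The double-edged point in the quote, that knowing how a model responds to its inputs helps defenders and attackers alike, can be illustrated with a toy sketch. Everything below is hypothetical: a hand-rolled linear credit-scoring model whose per-feature contributions "explain" a decision, and in doing so also tell an adversary which input is most worth manipulating.

```python
# Hypothetical learned weights for a toy linear credit-scoring model.
# None of these names or values come from the post; they are illustrative only.
weights = {"income": 0.8, "debt": -1.2, "age": 0.1}
bias = -0.5

def score(applicant):
    """Linear score: bias plus the weighted sum of the applicant's features."""
    return bias + sum(weights[f] * applicant[f] for f in weights)

def explain(applicant):
    """Per-feature contribution to the score, largest magnitude first.

    This is the 'why' behind a decision -- and also a map of which
    feature an adversary should perturb to move the score the most.
    """
    contribs = {f: weights[f] * applicant[f] for f in weights}
    return sorted(contribs.items(), key=lambda kv: abs(kv[1]), reverse=True)

applicant = {"income": 1.0, "debt": 0.9, "age": 0.4}
print(round(score(applicant), 2))   # -0.74: the application is scored negatively
print(explain(applicant)[0][0])     # "debt": the feature driving that decision
```

The same ranked contributions that reassure a regulator ("the decision was driven by debt") tell an attacker that falsifying the debt field is the cheapest way to flip the outcome. For real models the analogue would be tools like feature-importance or attribution methods rather than raw coefficients.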

This discussion is very important, as here in Europe there is a major governmental movement against the lack of explainability in how algorithms work and how automated decisions are made.
