Summary: Security isn't usually treated as a high priority in machine learning systems, yet rapid advances in ML are introducing a whole new set of security risks that differ substantially from those of traditional software. This report reviews known security risks for ML systems and examines why security in this area is particularly important today. Catherine Nelson, principal data scientist at SAP Concur, describes techniques to enhance security, increase privacy, and mitigate attacks on ML systems when they do occur. After defining what "secure" means for an ML system, she examines whether the techniques available today are sufficient to achieve true security. This report is ideal for ML engineers, data scientists, and managers of ML teams. You will:

- Learn the key points in the machine learning lifecycle at which security becomes particularly important
- Get an overview of known security risks, including transfer learning attacks, model theft, model inversion, and membership inference attacks
- Mitigate security risks using audits and governance, model monitoring, data checks and balances, and general security practices