One of the determinants of a good anomaly detector is a smart data representation that can easily evince deviations from the normal distribution. Traditional supervised approaches require strong assumptions about what is normal and what is not, plus a non-negligible effort in labeling the training dataset. Deep auto-encoders work very well at learning high-level abstractions and non-linear relationships in the data without requiring data labels. In this talk we will review a few popular techniques from shallow machine learning and propose two semi-supervised approaches for novelty detection: one based on reconstruction error and another based on lower-dimensional feature compression.
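As a minimal sketch of the reconstruction-error idea (not the talk's actual implementation), the example below trains scikit-learn's `MLPRegressor` as a toy autoencoder on synthetic "normal" data lying near a low-dimensional manifold, then flags a point whose reconstruction error exceeds a percentile threshold. The synthetic dataset, layer sizes, and the 99th-percentile threshold are all illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)

# "Normal" training data: points near a 1-D manifold embedded in 5-D space,
# plus a little Gaussian noise.
t = rng.normal(size=(500, 1))
X_train = np.hstack([t, 2 * t, t ** 2, -t, 0.5 * t])
X_train += rng.normal(scale=0.05, size=X_train.shape)

# A toy autoencoder: the network is trained to reproduce its own input
# through a 2-unit bottleneck, so it can only reconstruct well the
# patterns it has seen during training.
ae = MLPRegressor(hidden_layer_sizes=(2,), activation="tanh",
                  max_iter=2000, random_state=0)
ae.fit(X_train, X_train)

def reconstruction_error(X):
    """Mean squared reconstruction error per sample."""
    return np.mean((ae.predict(X) - X) ** 2, axis=1)

# Semi-supervised threshold: learned from normal data only,
# e.g. the 99th percentile of training reconstruction errors.
threshold = np.percentile(reconstruction_error(X_train), 99)

# A point far off the training manifold should score above the threshold.
x_novel = rng.normal(loc=10.0, size=(1, 5))
is_novel = reconstruction_error(x_novel)[0] > threshold
```

The same pipeline hints at the second approach from the abstract: instead of thresholding the reconstruction error, one can take the bottleneck activations as a compressed feature space and run a shallow detector there.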
Data Scientist with proven experience building machine learning products across different industries. Currently leading the AI team at Helixa. Co-author of the book "Python Deep Learning", contributor to the "Professional Manifesto for Data Science", and founder of the DataScienceMilan.org community. My favorite hobbies include home cooking, martial arts, and exploring the surrounding nature while traveling by motorcycle.