MetaReview: The paper addresses the robust out-of-distribution (OOD) detection problem. The goal is to learn a detector that separates the normal data distribution, represented by a multi-class labeled training set, from out-of-distribution data by leveraging auxiliary outlier data available at training time. To this end, the paper presents ATOM, an adversarial training procedure that exploits selected outlier examples during learning to tighten the decision boundary around normal data. The selection process, called informative outlier mining, consists of selecting the outlier examples on which the detector is most uncertain. This selection step is the main novelty of the proposal with respect to existing approaches, which instead consider only randomly sampled outliers during training. Despite its simplicity, experiments show that this modification can lead to more robust detection across different settings and to improved detection results compared with a number of state-of-the-art OOD methods. Moreover, the theoretical analysis sheds some light on the importance of informative outlier mining. The paper is generally well written and easy to follow, and it presents an interesting, simple strategy for the OOD problem, substantiated by adequate experimental and theoretical analysis.
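
To make the mining step concrete, here is a minimal sketch (not the authors' code) of how informative outlier mining could be implemented in PyTorch, assuming a (K+1)-way classifier whose last logit acts as a rejection class and whose softmax probability serves as the OOD score; the function name, the quantile offset q, and the batch interface are all illustrative assumptions rather than details taken from the paper.

import torch
import torch.nn.functional as F

def mine_informative_outliers(model, outlier_batches, num_selected,
                              q=0.125, device="cpu"):
    # Score every pooled auxiliary outlier with the detector's OOD
    # confidence: the softmax probability of the assumed rejection class.
    model.eval()
    scores, samples = [], []
    with torch.no_grad():
        for x in outlier_batches:
            probs = F.softmax(model(x.to(device)), dim=1)
            scores.append(probs[:, -1].cpu())  # OOD score per sample
            samples.append(x)
    scores = torch.cat(scores)
    samples = torch.cat(samples, dim=0)
    # Sort ascending: low-score outliers are those the detector is least
    # sure about, i.e. the informative, boundary-region examples.
    order = torch.argsort(scores)
    start = int(q * len(order))  # skip a small, potentially noisy head
    keep = order[start:start + num_selected]
    return samples[keep]

Under this reading, the mined subset would replace the randomly drawn outliers used by earlier methods when forming the training objective in each epoch.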