Threat Detection in X-ray Baggage Security Imagery Using Convolutional Neural Networks


Altindag E. E., YÜKSEL ERDEM S. E.

Conference on Anomaly Detection and Imaging with X-Rays (ADIX) VII, ELECTR NETWORK, 3 April - 12 June 2022, vol.12104

  • Publication Type: Conference Paper / Full Text
  • Volume: 12104
  • Doi Number: 10.1117/12.2622373
  • Country: ELECTR NETWORK
  • Keywords: X-ray imagery, convolutional neural network, object detection, baggage screening, transfer learning
  • Hacettepe University Affiliated: Yes

Abstract

X-ray security screening has become crucial to maintaining safety in public spaces. Hence, X-ray screening equipment is widely used in airports, shopping centers, and similar venues to prevent the transportation of harmful objects. However, this equipment cannot detect threats without human operators. In recent years, automatic threat detection in baggage has been studied, and several methods have been proposed for X-ray images. In this study, we introduce a publicly available single-view dual-channel X-ray dataset, called the HUMS X-ray dataset, a joint effort of Hacettepe University and MS Spektral Inc. This dataset includes the low-energy, high-energy, and false-colored images of knife threats in baggage under complex scenarios such as occlusion. We then detect the threats in both the HUMS and SIXray datasets using architectures based on Convolutional Neural Network (CNN) techniques. Three popular object detection algorithms, namely Faster R-CNN, YOLOv3 (You Only Look Once), and SSD (Single Shot Detector), are applied to SIXray, the larger X-ray dataset. The best model obtained is then transferred to the relatively small and different dual-energy X-ray baggage imagery dataset to detect knife threats, reusing the weights learned from the large X-ray dataset, and the effects of few-shot learning and fine-tuning are investigated. Furthermore, to observe the effects of the low-energy and high-energy images, models are trained on the HUMS X-ray dataset with false-colored, low-energy, and high-energy images. The dataset is publicly available with all the low-energy, high-energy, and false-colored images *.
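
As a rough illustration of the transfer-learning setup described in the abstract, the sketch below fine-tunes a Faster R-CNN detector whose weights were previously trained on a large X-ray dataset (such as SIXray) on a smaller knife-only dataset (such as HUMS). This is a minimal sketch, not the authors' pipeline: the checkpoint filename, the two-class label set, the frozen backbone (a few-shot style choice), and the dummy one-image batch are all assumptions made only for illustration.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 2  # assumption: background + "knife"

# Build a Faster R-CNN model and replace the box predictor for the target classes.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None)
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# Load weights obtained from training on the large X-ray dataset.
# (Hypothetical checkpoint path; commented out so the sketch runs as-is.)
# state = torch.load("sixray_pretrained.pth")
# model.load_state_dict(state, strict=False)

# Freeze the backbone so only the detection head adapts (few-shot style fine-tuning).
for p in model.backbone.parameters():
    p.requires_grad = False

optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3, momentum=0.9
)

# Dummy one-image batch standing in for a HUMS sample, just to show the API shape:
# a false-colored image tensor plus one ground-truth knife box.
images = [torch.rand(3, 512, 512)]
targets = [{
    "boxes": torch.tensor([[50.0, 60.0, 150.0, 200.0]]),
    "labels": torch.tensor([1]),
}]

model.train()
loss_dict = model(images, targets)   # classification + box regression losses
loss = sum(loss_dict.values())
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In practice the dummy batch would be replaced by a DataLoader over the smaller dataset, and the same loop could be run with false-colored, low-energy, or high-energy inputs to compare their effect, as the abstract describes.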