TY - GEN
T1 - A Novel Deep Convolutional Neural Network Architecture for Customer Counting in the Retail Environment
AU - Abed, Almustafa
AU - Akrout, Belhassen
AU - Amous, Ikram
N1 - Publisher Copyright:
© 2022, Springer Nature Switzerland AG.
PY - 2022
Y1 - 2022
N2 - Machine-learning and feature-based approaches have been developed in recent years to count shoppers in retail stores using RGB-D sensors in an occlusion-free, top-view configuration. With the advent of large-scale media, deep learning approaches have become very popular and are applied to a wide variety of tasks, such as the detection and identification of people in crowded scenes. Detecting and counting people is difficult, especially in cluttered and crowded environments such as malls, airports, and retail stores, and understanding the behavior of humans in a retail store is crucial for the efficient functioning of the business. We present a novel semantic segmentation approach based on a convolutional neural network to segment and count human heads under heavy occlusion from top-view depth image data. The goal of our approach is to segment and count human heads in datasets acquired with depth sensors (ASUS Xtion pro). Semantic segmentation is typically performed on RGB images, but in this work we use depth images to segment human heads. The proposed architecture uses a pre-trained ResNet50 as the encoder, followed by a decoder network. The framework is evaluated on the publicly available TVHeads dataset, which contains depth images of people collected with an RGB-D sensor positioned in a top-view configuration. The results show good accuracy and demonstrate that our approach is efficient and well suited to this task.
AB - Machine-learning and feature-based approaches have been developed in recent years to count shoppers in retail stores using RGB-D sensors in an occlusion-free, top-view configuration. With the advent of large-scale media, deep learning approaches have become very popular and are applied to a wide variety of tasks, such as the detection and identification of people in crowded scenes. Detecting and counting people is difficult, especially in cluttered and crowded environments such as malls, airports, and retail stores, and understanding the behavior of humans in a retail store is crucial for the efficient functioning of the business. We present a novel semantic segmentation approach based on a convolutional neural network to segment and count human heads under heavy occlusion from top-view depth image data. The goal of our approach is to segment and count human heads in datasets acquired with depth sensors (ASUS Xtion pro). Semantic segmentation is typically performed on RGB images, but in this work we use depth images to segment human heads. The proposed architecture uses a pre-trained ResNet50 as the encoder, followed by a decoder network. The framework is evaluated on the publicly available TVHeads dataset, which contains depth images of people collected with an RGB-D sensor positioned in a top-view configuration. The results show good accuracy and demonstrate that our approach is efficient and well suited to this task.
KW - Computer-vision
KW - Convolutional neural networks
KW - Deep-learning
KW - Intelligent retail environment
KW - People counting
UR - http://www.scopus.com/inward/record.url?scp=85133222695&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-08277-1_27
DO - 10.1007/978-3-031-08277-1_27
M3 - Conference contribution
AN - SCOPUS:85133222695
SN - 9783031082764
T3 - Communications in Computer and Information Science
SP - 327
EP - 340
BT - Intelligent Systems and Pattern Recognition - 2nd International Conference, ISPR 2022, Revised Selected Papers
A2 - Bennour, Akram
A2 - Ensari, Tolga
A2 - Kessentini, Yousri
A2 - Eom, Sean
PB - Springer Science and Business Media Deutschland GmbH
T2 - 2nd International Conference on Intelligent Systems and Pattern Recognition, ISPR 2022
Y2 - 24 March 2022 through 26 March 2022
ER -