DIGITAL LIBRARY ARCHIVE
A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data
Kilho Kim (SUALAB)
Sangwoo Choi (Recommendations, Coupang)
Moon-jung Chae (Department of Industrial Engineering and Institute for Industrial Systems Innovation, Seoul National University)
Heewoong Park (Department of Industrial Engineering and Institute for Industrial Systems Innovation, Seoul National University)
Jaehong Lee (kakaomobility datalab)
Jonghun Park (Department of Industrial Engineering and Institute for Industrial Systems Innovation, Seoul National University)
Vol. 25, No. 1, pp. 163-177
DOI: 10.13088/jiis.2019.25.1.163
Keywords
human activity recognition, group interaction, smartphone multimodal sensors, convolutional neural network, long short-term memory recurrent network
Abstract
As smartphones have become widely used, human activity recognition (HAR) tasks that recognize the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from recognizing the simple body movements of an individual user to recognizing low-level and high-level behaviors. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Moreover, previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect sufficient data, whereas physical sensors such as the accelerometer, magnetic field sensor, and gyroscope are less privacy-sensitive and can collect a large amount of data within a short time. In this paper, a method for detecting accompanying status with a deep learning model that uses only multimodal physical sensor data (accelerometer, magnetic field, and gyroscope) was proposed. Accompanying status was defined as a redefinition of part of the user's interaction behavior, covering whether the user is accompanied by an acquaintance at close distance and whether the user is actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompaniment and conversation was proposed.
First, a data preprocessing pipeline was introduced, consisting of time synchronization of multimodal data from different physical sensors, data normalization, and sequence data generation. Nearest-neighbor interpolation was applied to synchronize the timestamps of the data collected from the different sensors. Normalization was performed for each x, y, and z axis of the sensor data, and sequence data was generated with a sliding window. The sequences then became the input to the CNN, which extracts feature maps representing local dependencies of the original sequence. The CNN consisted of three convolutional layers and had no pooling layer, so the temporal information of the sequence data was preserved. Next, LSTM recurrent networks received the feature maps and learned long-term dependencies from them; the LSTM part consisted of two layers, each with 128 cells. Finally, the extracted features were classified by a softmax classifier. The loss function was cross entropy, and the weights were randomly initialized from a normal distribution with mean 0 and standard deviation 0.1. The model was trained with the adaptive moment estimation (Adam) optimizer with a mini-batch size of 128. Dropout was applied to the inputs of the LSTM layers to prevent overfitting. The initial learning rate was set to 0.001 and decayed exponentially by a factor of 0.99 at the end of each training epoch.
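The preprocessing steps described above (nearest-neighbor time synchronization, per-axis normalization, sliding-window sequence generation) can be sketched in plain NumPy. This is a minimal illustration, not the authors' code; the function names, window length, and step size are assumptions for the example.

```python
import numpy as np

def sync_nearest(ts_src, values, ts_target):
    """Resample one sensor stream onto a common timebase by picking,
    for each target timestamp, the nearest source sample."""
    idx = np.searchsorted(ts_src, ts_target)
    idx = np.clip(idx, 1, len(ts_src) - 1)
    left, right = ts_src[idx - 1], ts_src[idx]
    # Step back one index wherever the left neighbor is closer.
    idx = idx - (ts_target - left < right - ts_target)
    return values[idx]

def normalize(x):
    """Z-score normalization applied independently to each axis (column)."""
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)

def sliding_windows(x, window, step):
    """Cut a (time, channels) array into overlapping fixed-length sequences."""
    starts = range(0, len(x) - window + 1, step)
    return np.stack([x[s:s + window] for s in starts])
```

After synchronizing each sensor stream onto one timebase, the streams would be stacked column-wise (e.g. 9 channels for the three axes of accelerometer, magnetic field, and gyroscope) before windowing.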
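The classifier's data flow (three convolutional layers without pooling, two stacked LSTM layers, softmax on the final hidden state) can likewise be sketched as a NumPy forward pass. Layer sizes here are toy-sized for readability (the paper uses 128 LSTM cells per layer), and every function and variable name is illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def init(*shape):
    # Weights drawn from N(0, 0.1), matching the initialization in the text.
    return rng.normal(0.0, 0.1, size=shape)

def conv1d(x, w, b):
    """'Same'-padded 1-D convolution over (time, in_ch) with ReLU; w: (k, in_ch, out_ch)."""
    k, pad = w.shape[0], w.shape[0] // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    out = np.stack([xp[t:t + k].reshape(-1) @ w.reshape(-1, w.shape[2])
                    for t in range(len(x))])
    return np.maximum(out + b, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm(x, W, U, b, hidden):
    """One LSTM layer over (time, features); returns the hidden state at every step.
    Gate pre-activations are packed in the order [input, forget, cell, output]."""
    h, c, hs = np.zeros(hidden), np.zeros(hidden), []
    for t in range(len(x)):
        z = x[t] @ W + h @ U + b
        i, f, g, o = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
        hs.append(h)
    return np.stack(hs)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def accompany_forward(x, hidden=16, classes=2):
    """Forward pass: 3 conv layers (no pooling) -> 2 stacked LSTM layers -> softmax."""
    ch = x.shape[1]
    for out_ch in (8, 8, 8):          # three convolutional layers, no pooling
        x = conv1d(x, init(3, ch, out_ch), init(out_ch))
        ch = out_ch
    for _ in range(2):                # two LSTM layers (paper: 128 cells each)
        x = lstm(x, init(ch, 4 * hidden), init(hidden, 4 * hidden),
                 init(4 * hidden), hidden)
        ch = hidden
    return softmax(x[-1] @ init(hidden, classes) + init(classes))
```

A real implementation would of course train these weights with Adam and the cross-entropy loss, with dropout on the LSTM inputs, rather than keep the random initialization; the sketch only shows how data flows through the architecture.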
An Android smartphone application was developed and released to collect data from a total of 18 subjects. On this data, the model classified accompaniment and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and the accuracy of the model were higher than those of a majority-vote classifier, a support vector machine, and a deep recurrent neural network. Future research will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences, and will further study transfer learning methods that allow a model tailored to the training data to be transferred to evaluation data drawn from a different distribution. Such a model is expected to exhibit robust recognition performance against changes in the data that were not considered at the training stage.
Cite this article
JIIS Style
Kim, K., S. Choi, M.-j. Chae, H. Park, J. Lee, and J. Park, "A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data", Journal of Intelligence and Information Systems, Vol. 25, No. 1 (2019), 163~177.

IEEE Style
Kilho Kim, Sangwoo Choi, Moon-jung Chae, Heewoong Park, Jaehong Lee, and Jonghun Park, "A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data", Journal of Intelligence and Information Systems, vol. 25, no. 1, pp. 163~177, 2019.

ACM Style
Kim, K., Choi, S., Chae, M.-j., Park, H., Lee, J., and Park, J., 2019. A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data. Journal of Intelligence and Information Systems. 25, 1, 163-177.
Export Formats: BibTeX, EndNote

Warning: include(/home/hosting_users/ev_jiisonline/www/admin/archive/advancedSearch.php) [function.include]: failed to open stream: No such file or directory in /home/hosting_users/ev_jiisonline/www/archive/detail.php on line 429

Warning: include() [function.include]: Failed opening '/home/hosting_users/ev_jiisonline/www/admin/archive/advancedSearch.php' for inclusion (include_path='.:/usr/local/php/lib/php') in /home/hosting_users/ev_jiisonline/www/archive/detail.php on line 429
@article{Kim:JIIS:2019:766,
author = {Kim, Kilho and Choi, Sangwoo and Chae, Moon-jung and Park, Heewoong and Lee, Jaehong and Park, Jonghun },
title = {A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data},
journal = {Journal of Intelligence and Information Systems},
issue_date = {March 2019},
volume = {25},
number = {1},
month = mar,
year = {2019},
issn = {2288-4866},
pages = {163--177},
url = {http://dx.doi.org/10.13088/jiis.2019.25.1.163},
doi = {10.13088/jiis.2019.25.1.163},
publisher = {Korea Intelligent Information System Society},
address = {Seoul, Republic of Korea},
keywords = {human activity recognition, group interaction, smartphone multimodal sensors, convolutional neural network, long short-term memory recurrent network},
}
%0 Journal Article
%1 766
%A Kilho Kim
%A Sangwoo Choi
%A Moon-jung Chae
%A Heewoong Park
%A Jaehong Lee
%A Jonghun Park
%T A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data
%J Journal of Intelligence and Information Systems
%@ 2288-4866
%V 25
%N 1
%P 163-177
%D 2019
%R 10.13088/jiis.2019.25.1.163
%I Korea Intelligent Information System Society