Using Naturalistic Vehicle-Based Data to Predict Distraction and Environmental Demand


Dina Kanaan, Suzan Ayas, Birsen Donmez, Martina Risteska, Joyita Chakraborty
Copyright: © 2019 | Volume: 11 | Issue: 3 | Pages: 12
ISSN: 1942-390X | EISSN: 1942-3918 | EISBN13: 9781522565666 | DOI: 10.4018/IJMHCI.2019070104
Cite Article

MLA

Kanaan, Dina, et al. "Using Naturalistic Vehicle-Based Data to Predict Distraction and Environmental Demand." IJMHCI, vol. 11, no. 3, 2019, pp. 59-70. http://doi.org/10.4018/IJMHCI.2019070104

APA

Kanaan, D., Ayas, S., Donmez, B., Risteska, M., & Chakraborty, J. (2019). Using Naturalistic Vehicle-Based Data to Predict Distraction and Environmental Demand. International Journal of Mobile Human Computer Interaction (IJMHCI), 11(3), 59-70. http://doi.org/10.4018/IJMHCI.2019070104

Chicago

Kanaan, Dina, et al. "Using Naturalistic Vehicle-Based Data to Predict Distraction and Environmental Demand." International Journal of Mobile Human Computer Interaction (IJMHCI) 11, no. 3 (2019): 59-70. http://doi.org/10.4018/IJMHCI.2019070104


Abstract

This research utilized vehicle-based measures from a naturalistic driving dataset to detect distraction, as indicated by long off-path glances (≥ 2 s) and engagement in a secondary (non-driving) task, and to estimate the motor control difficulty associated with the driving environment (i.e., curvature and poor surface conditions). Advanced driver assistance systems can exploit such driver behavior models to better support the driver and improve safety. Given the temporal nature of vehicle-based measures, Hidden Markov Models (HMMs) were utilized: GPS speed and steering wheel position were used to classify the presence of long off-path glances (yes vs. no) and secondary task engagement (yes vs. no), while lateral (x-axis) and longitudinal (y-axis) acceleration were used to classify motor control difficulty (lower vs. higher). The best classification accuracies, both 77%, were achieved when identifying long off-path glances and secondary task engagement.
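
The HMM-based classification described in the abstract can be illustrated with a minimal sketch: one Gaussian HMM is trained per class on windows of vehicle signals (synthetic stand-ins for GPS speed and steering wheel position below), and a new window is labeled by whichever class model assigns the higher log-likelihood. The library (hmmlearn), window length, feature layout, and number of hidden states are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch (assumed setup, not the paper's implementation):
# per-class Gaussian HMMs with likelihood-ratio classification.
import numpy as np
from hmmlearn import hmm

def train_class_hmm(windows, n_states=3, seed=0):
    """Fit a Gaussian HMM to a list of (T, n_features) windows from one class."""
    X = np.concatenate(windows)          # hmmlearn expects stacked sequences
    lengths = [len(w) for w in windows]  # plus the length of each sequence
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=100, random_state=seed)
    model.fit(X, lengths)
    return model

def classify(window, model_pos, model_neg):
    """Return 1 if the 'positive' class HMM explains the window better, else 0."""
    return int(model_pos.score(window) > model_neg.score(window))

# Synthetic two-channel windows standing in for GPS speed and steering
# wheel position (50 time steps each); real data would replace these.
rng = np.random.default_rng(0)
pos_windows = [rng.normal(0.0, 1.0, size=(50, 2)) for _ in range(20)]
neg_windows = [rng.normal(0.5, 1.5, size=(50, 2)) for _ in range(20)]

model_pos = train_class_hmm(pos_windows)
model_neg = train_class_hmm(neg_windows)
print(classify(pos_windows[0], model_pos, model_neg))
```

The same scheme would apply to the second task in the abstract by swapping the input channels for lateral and longitudinal acceleration and relabeling the classes as lower vs. higher motor control difficulty.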