Alternative Title

Cognition and Context-Aware Computing: Towards a Situation-Aware System with a Case Study in Aviation

Abstract

In aviation, flight instructors seek to comprehend the intent and awareness of their students. With this awareness, derived from in-flight observation and post-flight examination, a human instructor can infer the internal contexts of student aviators as they perform, and it is this understanding that is fundamental to evaluating student development. A well-understood construct for describing the state of knowledge about a dynamic environment is situational awareness (SA). Pilot error is often associated with SA [80], and SA is fundamental to flight safety and mission execution. If these contexts can be automatically inferred, instructors and students can more easily determine when mastery of a particular skill has been achieved, potentially improving the overall quality of student understanding. The goal of this research is the creation of a situation-aware inference system: a system that can evaluate the human-machine state, elucidating cognitive load, situational awareness, engagement, stress, gaze patterns, and the trade-offs between stimulus, attention, and performance.

In this work, we investigated the automated classification of cognitive load, leveraging the aviation domain as a surrogate for inducing complex task workload. We used a mixed virtual and physical flight environment instrumented with a suite of biometric sensors, the HTC Vive Pro Eye and the Empatica E4. We created and evaluated multiple models, taking advantage of advances in deep learning such as generative learning, multi-modal learning, multi-task learning, and x-vector architectures, to classify multiple tasks across 40 subjects spanning three subject types: pilots, operators, and novices. Our cognitive load model can automate the evaluation of cognitive load agnostic to subject, subject type, and flight maneuver (task) with an accuracy of over 81%. Further, this approach has been validated with real-flight data from five military test pilots collected over two test and evaluation flights on a C-17 aircraft.
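To make the modeling approach concrete, the sketch below shows an x-vector-style classifier of the kind referenced above: frame-level 1D convolutions over a multi-channel biometric window, statistics pooling over time, and segment-level layers that emit cognitive load logits. The channel count, window length, two-class output, and PyTorch implementation are illustrative assumptions, not the dissertation's actual model.

    # Minimal sketch of an x-vector-style cognitive load classifier (PyTorch).
    # The 8 input channels, 10-second window, and two load classes are
    # assumptions for illustration only.
    import torch
    import torch.nn as nn

    class XVectorCognitiveLoad(nn.Module):
        def __init__(self, in_channels=8, num_classes=2):
            super().__init__()
            # Frame-level layers: dilated 1D convolutions over the time axis,
            # analogous to the TDNN layers of the original x-vector network.
            self.frame = nn.Sequential(
                nn.Conv1d(in_channels, 64, kernel_size=5, dilation=1), nn.ReLU(),
                nn.Conv1d(64, 64, kernel_size=3, dilation=2), nn.ReLU(),
                nn.Conv1d(64, 64, kernel_size=3, dilation=3), nn.ReLU(),
            )
            # Segment-level layers act on mean+std statistics pooled over time,
            # so the classifier is independent of the window length.
            self.segment = nn.Sequential(
                nn.Linear(128, 64), nn.ReLU(),
                nn.Linear(64, num_classes),
            )

        def forward(self, x):  # x: (batch, channels, time)
            h = self.frame(x)
            # Statistics pooling: concatenate per-channel mean and std over time.
            stats = torch.cat([h.mean(dim=2), h.std(dim=2)], dim=1)
            return self.segment(stats)  # logits over cognitive load classes

    # Example: a batch of 4 windows of 8-channel biometric signals at 32 Hz.
    model = XVectorCognitiveLoad()
    logits = model(torch.randn(4, 8, 320))
    print(logits.shape)  # torch.Size([4, 2])

The statistics-pooling step is what lets a single model score windows of varying length, which is useful when flight maneuvers (tasks) differ in duration.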

Gaze tracking in the context of learning is a particularly exciting area of research, as it can provide a better understanding of intent, situational awareness, and attention. We investigate how gaze tracking in a mixed virtual and physical flight environment can be employed to mimic the observations of a flight instructor. Our gaze classifier focuses on the scan patterns of aviators in a mixed-reality flight simulation. We created and evaluated two models, a task-agnostic model and a multi-task model, both of which use deep convolutional neural networks to classify the quality of pilot gaze scan patterns against visual inspection by three experienced flight instructors. Our multi-task model can automate the process of gaze inspection for three separate flight tasks.
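As a rough illustration of the multi-task gaze model described above, the sketch below shares a convolutional trunk across tasks and attaches one quality head per flight task. Representing a scan pattern as a 64x64 gaze heatmap with a binary acceptable-versus-poor rating per task is an assumption made for this example, not the dissertation's actual input representation.

    # Minimal sketch of a multi-task CNN for gaze scan quality (PyTorch).
    import torch
    import torch.nn as nn

    class MultiTaskGazeCNN(nn.Module):
        def __init__(self, num_flight_tasks=3):
            super().__init__()
            # Shared convolutional trunk over the gaze heatmap.
            self.trunk = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Flatten(),
                nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            )
            # One binary head (acceptable vs. poor scan) per flight task.
            self.heads = nn.ModuleList(
                [nn.Linear(64, 2) for _ in range(num_flight_tasks)]
            )

        def forward(self, heatmap, task_id):
            features = self.trunk(heatmap)        # heatmap: (batch, 1, 64, 64)
            return self.heads[task_id](features)  # logits for the given task

    # Example: score a batch of 4 heatmaps against the first flight task.
    model = MultiTaskGazeCNN()
    logits = model(torch.randn(4, 1, 64, 64), task_id=0)
    print(logits.shape)  # torch.Size([4, 2])

Sharing the trunk lets the three tasks pool training data for low-level gaze features, while the per-task heads preserve task-specific notions of a good scan.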

Our approach could assist flight instructors throughout the learning process, inform operational successes and failures, or even open the door to more automated feedback for pilots, improving the execution of various phases of flight.

Degree Date

Summer 8-4-2020

Document Type

Dissertation

Degree Name

Ph.D.

Department

Computer Science and Engineering

Advisor

Suku Nair

Second Advisor

Eric Larson

Third Advisor

Jeff Tian

Fourth Advisor

Jennifer Dworak

Fifth Advisor

Frank Coyle

Sixth Advisor

Sandro Scielzo

Subject Area

Computer Science

Number of Pages

205

Format

.pdf

Creative Commons License

This work is licensed under a Creative Commons Attribution-Noncommercial 4.0 License.
