Aviation Security: Should We Trust Automation?

Advances in technology continue to break new barriers, allowing airports around the world to adopt innovative security systems, improve threat detection, streamline passenger throughput, and increase passenger satisfaction. These technologies center on artificial intelligence (AI) and automation. This article discusses human cognitive limitations, how automation can hinder or help human performance, and why correctly applied automation is critical for the future of aviation security.

Current Applications of Automation in Aviation Security

Automation can be defined as the performance of tasks by machines that were previously carried out in some capacity by humans. Automation encompasses more than physical machines doing manual work; it often takes the form of algorithms processing large datasets. These algorithms are called artificial intelligence (AI) when they are trained to learn from and correct their mistakes in a multi-faceted, dynamic environment.

Automation and AI are used in Advanced Imaging Technology (AIT), Computed Tomography (CT), Computer Based Training (CBT), Credential Authentication Technology (CAT), and many biometric technologies such as facial recognition, all of which are deployed in airports across the world. For each of these technologies, a computer algorithm assists a human operator in making a correct assessment. The algorithms draw on thousands of image characteristics stored in databases to statistically assess the likelihood of a security threat.
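The specific algorithms deployed in airports are proprietary, but the assistive pattern is simple to sketch: the automation scores an item and recommends an action, while the final assessment stays with the officer. Everything in the sketch below, including the function name and threshold, is a hypothetical illustration rather than any vendor's actual interface.

```python
# Minimal sketch (hypothetical names and thresholds) of how an assistive
# screening algorithm flags items for operator review rather than deciding alone.

def assist_operator(threat_probability: float, alarm_threshold: float = 0.3) -> str:
    """Map a model's threat probability to an advisory for the human screener.

    The algorithm only recommends; the Transportation Security Officer
    makes the final assessment.
    """
    if threat_probability >= alarm_threshold:
        return "ALARM: route bag to secondary search"
    return "CLEAR: no automated alarm"

# Example: a bag image scored at 0.42 by a (hypothetical) image model
print(assist_operator(0.42))   # ALARM: route bag to secondary search
print(assist_operator(0.05))   # CLEAR: no automated alarm
```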

The Need for Automation

Since the start of the pandemic in 2020, airline passenger numbers have been climbing steadily back toward pre-COVID levels and, at the current trend, will continue to set records. According to the Transportation Security Administration (TSA), 108,310 travelers were recorded on April 4, 2020 (the beginning of the pandemic), 1,561,959 travelers on April 4, 2021, and 2,139,084 travelers on April 4, 2022 (Figure 1).

Figure 1: TSA traveler numbers on April 4th for 2019, 2020, 2021, and 2022
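For readers who want the growth spelled out, a quick calculation over the figures just quoted gives the year-over-year multiples (the exact 2019 value is not listed in the text, so it is omitted here):

```python
# Year-over-year change in TSA checkpoint throughput on April 4th,
# using the figures quoted above.
throughput = {2020: 108_310, 2021: 1_561_959, 2022: 2_139_084}

for year in (2021, 2022):
    prev = throughput[year - 1]
    curr = throughput[year]
    print(f"{year}: {curr:,} travelers ({curr / prev:.1f}x the prior year)")

# 2021: 1,561,959 travelers (14.4x the prior year)
# 2022: 2,139,084 travelers (1.4x the prior year)
```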

As traveler numbers continue to increase, Transportation Security Officers (TSOs) must screen more people efficiently, which is physically and mentally challenging. For this reason, there are industry-guided time limits on search-related tasks to maximize TSO performance. Automation and AI assist TSOs with search tasks at baggage checkpoints to identify security threats, and they can help TSOs stay alert so that their detection sensitivity does not diminish.

Humans have limited short-term memory capacity and can only manage so much cognitive workload before performance drops significantly. We can hold only about 7 ± 2 items in short-term memory, also known as working memory (more recent data suggest a capacity of about 4 ± 1 chunks when chunking is taken into account).

Similarly, Wickens' Multiple Resource Theory stipulates that people have a limited pool of resources for mental processes. Different tasks draw on different resources (visual, auditory, verbal, and spatial). When two simultaneous tasks draw on the same resource (e.g., listening to two different conversations at once), performance suffers. However, when two simultaneous tasks draw on different resources (e.g., listening to a conversation while riding a bike), performance is not hindered.
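As a rough illustration (a toy sketch, not Wickens' formal model), task interference can be thought of as the overlap between the resource channels two concurrent tasks demand:

```python
# Toy illustration of Multiple Resource Theory (not Wickens' formal model):
# concurrent tasks interfere more when they compete for the same resources.

def interference(task_a: set[str], task_b: set[str]) -> int:
    """Count resource channels that two concurrent tasks both demand."""
    return len(task_a & task_b)

listening_to_conversation = {"auditory", "verbal"}
listening_to_second_conversation = {"auditory", "verbal"}
riding_a_bike = {"visual", "spatial"}

# Two conversations compete for the same channels -> high interference
print(interference(listening_to_conversation, listening_to_second_conversation))  # 2
# Conversation + bike riding draw on different channels -> low interference
print(interference(listening_to_conversation, riding_a_bike))                     # 0
```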

Figure 2: Wickens' Multiple Resource Theory

Vigilance is another important human limitation to consider. A vigilance task is a sustained, attention-demanding task in which an observer searches for predefined targets over a prolonged period of time. Many studies have shown that performance on vigilance tasks decreases significantly over time. For TSOs, the current task cycle is limited to 30 minutes to mitigate this degradation in performance.

The point is that humans are limited in many ways compared to our AI counterparts. While we may struggle to simultaneously remember two new phone numbers, computers can store and accurately manipulate millions of numbers concurrently, indefinitely. However, automation is not always perfect, and tasks must be carefully allocated between the system and the human to ensure a productive partnership between the two.

Automation Problems and How to Mitigate Them

Many models have been proposed that depict different levels of automation. In 1978, Sheridan and Verplank identified 10 levels of automation (Figure 3A). In 2000, Parasuraman, Sheridan, and Wickens identified four stages of automation (Figure 3B). The National Highway Traffic Safety Administration (NHTSA) provides a five-level model of automation (Figure 3C), and the Society of Automotive Engineers (SAE) provides a six-level model (Figure 3D).

Figure 3: Models of automation: Sheridan and Verplank (A), Parasuraman, Sheridan, and Wickens (B), NHTSA driving automation (C), and SAE automation levels (D)

Each model suggests that there are multiple levels to consider when deciding when and how to use automation. Automation can suggest alternative actions for the operator to choose from (low-level automation), it can act autonomously while ignoring the human (high-level automation), or it can fall somewhere in between. In short, decisions should be well thought out to determine the correct functional allocation for a particular system: which tasks should be carried out by humans, which functions should be carried out by automation, and what level of automation is needed. If the level of automation is too high, critical decisions with severe consequences may be made without human approval; if it is too low, operator workload may be excessive and performance may decrease.
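One way to make this functional allocation concrete in software is to treat the level of automation as an explicit configuration that determines whether the system may act without the operator. The level names and decision logic below are a simplified illustration, not a reproduction of any one published model:

```python
# Illustrative sketch of functional allocation: the configured level of
# automation determines whether the system may act without human approval.
# Level names are simplified and do not reproduce any published taxonomy.
from enum import Enum

class AutomationLevel(Enum):
    SUGGEST_ONLY = 1       # automation offers alternatives, human decides
    ACT_WITH_APPROVAL = 2  # automation acts only after human consent
    ACT_THEN_INFORM = 3    # automation acts, then informs the human
    FULLY_AUTONOMOUS = 4   # automation acts and may ignore the human

def resolve_alarm(level: AutomationLevel, operator_approves: bool) -> str:
    """Decide how an automated alarm is resolved under a given level."""
    if level is AutomationLevel.SUGGEST_ONLY:
        return "search bag" if operator_approves else "release bag"
    if level is AutomationLevel.ACT_WITH_APPROVAL:
        return "search bag" if operator_approves else "hold for supervisor"
    # Higher levels act regardless of the operator's input
    return "search bag (automation decided)"

print(resolve_alarm(AutomationLevel.SUGGEST_ONLY, operator_approves=False))
print(resolve_alarm(AutomationLevel.FULLY_AUTONOMOUS, operator_approves=False))
```

The design question is which of these branches a security system should be allowed to take for decisions with severe consequences.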

Operators who become complacent with automation, or who are not trained properly, may experience Out-of-the-Loop Unfamiliarity (OOTLUF): a degradation of skill and performance resulting from complacency and loss of situation awareness. For example, a highly automated system is expected to make consistent, rational decisions based on dynamic environmental data. If the operator is not actively involved, they may not understand what the automated system is doing or why; when the operator is then asked to take control, accidents can happen.

A 2020 study by David Huegli, Sarah Merks, and Adrian Schwaninger investigated automation reliability and operator compliance. 122 airport screeners were tested in an X-ray simulation task with automation support, while the automation's reliability, accuracy, sensitivity, and positive predictive value were manipulated. The results showed that automation provides the greatest benefit when operator confidence and unaided performance are low; when operator confidence and unaided performance are high, even accurate and reliable automation provides little benefit. Furthermore, operator compliance (actively processing automation alarms) was lower for automation with a higher false alarm rate and a low positive predictive value (PPV), and as a result roughly half of the true automation alarms were ignored.
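The role of PPV is easy to see with a back-of-the-envelope calculation. The numbers below are illustrative assumptions, not the study's actual values; they show how a rare threat base rate combined with a liberal alarm criterion leaves most alarms false:

```python
# Worked example (illustrative numbers, not the study's actual values):
# with rare threats and a liberal alarm criterion, most alarms are false,
# so the positive predictive value (PPV) of an alarm is low.

def positive_predictive_value(hit_rate: float, false_alarm_rate: float,
                              threat_base_rate: float) -> float:
    """P(threat | alarm) via Bayes' rule."""
    true_alarms = hit_rate * threat_base_rate
    false_alarms = false_alarm_rate * (1 - threat_base_rate)
    return true_alarms / (true_alarms + false_alarms)

# Assume threats appear in 1 in 1,000 bags, the automation catches 95% of
# them, but it also alarms on 10% of harmless bags.
ppv = positive_predictive_value(hit_rate=0.95, false_alarm_rate=0.10,
                                threat_base_rate=0.001)
print(f"PPV = {ppv:.3f}")  # PPV = 0.009
```

Under these assumptions, fewer than 1 in 100 alarms corresponds to a real threat, which is exactly the situation in which operators learn to discount the automation.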

In other words, when automation is not reliable or accurate, operators stop taking its warnings seriously, producing a cry-wolf effect. If this is the case, the authors suggest giving operators clear instructions on how to respond to automated alarms; most automated systems will not be perfect, especially systems with a liberal criterion that accept a high false alarm rate in exchange for a higher hit rate in critical operations (Figure 4). Nonetheless, if automation is correctly implemented with proper training, it can deliver impressive results when paired with a human.

Figure 4: Signal detection theory
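For readers who want the quantities behind Figure 4, the sketch below computes sensitivity (d') and response criterion (c) from hit and false alarm rates using the standard signal detection formulas; the example rates are illustrative, not measured screener data:

```python
# Sketch of the signal detection quantities behind Figure 4: sensitivity (d')
# and response criterion (c) computed from hit and false alarm rates.
from statistics import NormalDist

def d_prime_and_criterion(hit_rate: float, false_alarm_rate: float) -> tuple[float, float]:
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, criterion

# A liberal criterion trades a higher false alarm rate for a higher hit rate.
liberal = d_prime_and_criterion(hit_rate=0.95, false_alarm_rate=0.30)
conservative = d_prime_and_criterion(hit_rate=0.70, false_alarm_rate=0.05)
print(f"liberal:      d' = {liberal[0]:.2f}, c = {liberal[1]:.2f}")       # d' = 2.17, c = -0.56
print(f"conservative: d' = {conservative[0]:.2f}, c = {conservative[1]:.2f}")  # d' = 2.17, c = 0.56
```

In this example both settings have the same sensitivity; only the criterion differs, which is precisely the tradeoff a liberally calibrated security system is making.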

Should We Trust in Automation?

To understand whether we should trust automation, it is essential to know what facilitates trust in it. Trust is an affective state of the operator and is typically assessed with subjective ratings. Automation reliability and complexity are key variables that influence trust: automation with high false alarm or miss rates, or with responses that cannot be explained, will decrease user trust. Overtrust can also be a liability, because it leads to poor detection of automation failures; when an over-trusting user finally detects the automation's first failure, their trust is excessively revised downward.

There are several ways to address these issues and design for effective human-automation interaction. Information can be given to the operator to help them understand why the system behaves the way it does (the operator needs to know what the automation is and is not capable of). In addition, appropriate levels and stages of automation need to be identified and implemented, with the understanding that there is a tradeoff between workload reduction and loss of situation awareness. Both can be achieved through display design and training.

Like many complex things in life, the answer to the question "should we trust automation?" is: it depends. First, trust is a subjective, affective state of the user; there are no right or wrong personal feelings to have about automation. But there are ways to help individuals feel that automation can be trusted. If automation is appropriately designed, with a correctly calibrated criterion threshold, a correct allocation of functions, and transparency to the user, then people will generally trust it. Alternatively, if automation takes the human out of the loop for important decisions, erodes the user's skills, or is calibrated too liberally or too conservatively, then people will generally distrust it, and the system can even become a liability in security environments.

 

References

 

Automated vehicles for safety. (n.d.). NHTSA. https://www.nhtsa.gov/technology-innovation/automated-vehicles-safety

Cowan, N. (2010). The magical mystery four: How is working memory capacity limited, and why? Current Directions in Psychological Science, 19(1), 51–57. https://doi.org/10.1177/0963721409359277

Parasuraman, R., Sheridan, T. B., & Wickens, C. D. (2000). A model for types and levels of human interaction with automation. IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans, 30(3), 286–297. https://doi.org/10.1109/3468.844354

SAE levels of driving automation refined for clarity and international audience. (n.d.). Society of Automotive Engineers. https://www.sae.org/blog/sae-j3016-update

Sheridan, T. B., & Verplank, W. L. (1978). Human and computer control of undersea teleoperators (Technical report). Man-Machine Systems Laboratory, Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA.

TSA checkpoint travel numbers (current year versus prior year(s)/same weekday). (n.d.). U.S. Department of Homeland Security, Transportation Security Administration. https://www.tsa.gov/coronavirus/passenger-throughput

Wickens, C. D. (1984). Processing resources in attention. In R. Parasuraman & D. R. Davies (Eds.), Varieties of attention (pp. 63–101). Academic Press.