Thomas Cionek

Explainable AI in Air Traffic Control: When Trust Depends on Expertise and Cognitive Load

The study by Cartocci et al. (2026) examines an increasingly critical issue in complex technological systems: how professionals in air traffic control respond to explainable artificial intelligence (XAI). The central question is how user expertise influences three key dimensions: mental workload, acceptance of AI systems, and intentions to use them.

In safety-critical environments such as air traffic control, technological accuracy alone is not sufficient. AI systems must also be interpretable and operationally meaningful for human decision-makers working under high temporal pressure and cognitive demand. The study therefore focuses on whether explainable AI can truly support human operators rather than simply adding another layer of complexity.



What the Study Shows

The research investigates how air traffic controllers with different levels of expertise interact with explainable AI systems designed to support operational decision-making.

The central findings highlight that:

  • Expertise significantly influences how AI explanations are interpreted and accepted

  • Explanations can reduce or increase perceived workload depending on the user’s experience

  • The intention to rely on AI recommendations depends strongly on the clarity and usefulness of the explanation

This reveals an important insight: explainability is not universally beneficial. A form of explanation that helps an experienced operator may overwhelm a novice, while simplified explanations may feel insufficient for experts.

In other words, effective explainable AI must be adaptive to the cognitive profile and expertise of the user.


A Decolonial Neuroscience Perspective

From a Decolonial Neuroscience perspective, this study challenges a common technocratic assumption: that adding AI automatically improves human decision-making.

Human decisions in complex environments are not purely computational processes. They involve attention regulation, situational awareness, prediction, memory, and responsibility under uncertainty. Technology therefore interacts with embodied cognition, not with abstract reasoning alone.

This aligns well with the concept of the Damasian Mind, in which cognition emerges from the integration of interoception, proprioception, and environmental perception (Damasio, 2018). In high-risk operational environments, decision-making depends on maintaining a stable integration between bodily regulation and cognitive evaluation.

Explainable AI becomes truly useful only when it supports this integration rather than competing with it.


Interpretative Avatars: Jiwasa and APUS

Two conceptual avatars help interpret the dynamics described in this study: Jiwasa and APUS.

Jiwasa represents synchronization between multiple intelligences within a shared task. Air traffic control is not the work of a single brain but a coordinated system involving multiple human operators and computational agents.

APUS, representing extended body-territory perception, also plays a role. Air traffic controllers must maintain a form of distributed situational awareness of the airspace, mentally tracking aircraft trajectories, distances, and potential conflicts.

From this perspective, explainable AI should not function as an isolated “decision oracle.” Instead, it should act as a coordinative extension of the operator’s cognitive territory.


Connections with Tensional Selves and Functional States

This study can also be interpreted through the concept of Tensional Selves, which describe functional states of cognition in relation to task demands.

Zone 1
Operational task mode. The professional sustains a functional self capable of monitoring, prioritizing, and responding to events in real time.

Zone 2
State of fluid cooperation between human and system. Explanations reduce cognitive friction and support situational clarity.

Zone 3
State of overload or mistrust. Poorly designed explanations may increase cognitive burden, disrupt attention, or reduce trust in the system.

The study highlights that explainability must be understood as cognitive workload regulation, not merely as algorithmic transparency.
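The three zones above can be read as a simple state-classification problem. The sketch below is a toy illustration of that framing, assuming a normalized workload index and a trust rating as inputs; the thresholds are invented for illustration and are not values reported in the study:

```python
def classify_zone(workload: float, trust: float) -> int:
    """Map a normalized workload index (0-1) and a trust rating (0-1)
    to one of the three Tensional-Self zones described above.
    Thresholds are illustrative placeholders, not study results."""
    if workload > 0.75 or trust < 0.3:
        return 3  # overload or mistrust: explanations add cognitive burden
    if workload < 0.5 and trust > 0.6:
        return 2  # fluid cooperation between human and system
    return 1      # baseline operational task mode

print(classify_zone(0.4, 0.8))  # → 2 (fluid cooperation)
print(classify_zone(0.9, 0.7))  # → 3 (overload)
```

In practice, the workload index would come from physiological or behavioral measures rather than a single number, but the point of the sketch is that "explainability as workload regulation" implies a measurable state to regulate around.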


DREX Citizen and Organic Policy

At first glance, the study seems to concern only technological systems. However, it also illustrates a broader principle: human performance depends on metabolic and cognitive stability.

When individuals operate under constant uncertainty and overload, their ability to evaluate information and make flexible decisions deteriorates.

Within the concept of DREX Citizen, ensuring a minimal level of economic stability can be understood as metabolic support for the social body. Just as stable energy supply enables cellular function, social stability allows individuals to engage effectively with complex systems—including AI-mediated environments.


New Questions for BrainLatam

  1. Do physiological indicators such as EEG, HRV, respiration, or SpO₂ reveal when AI explanations reduce or increase cognitive load?

  2. Do expert and novice operators recruit different neural circuits when evaluating AI recommendations?

  3. Is there a threshold beyond which additional explanation becomes cognitive noise rather than support?

  4. Could adaptive explainability systems adjust explanation complexity based on physiological workload signals?

  5. Does explainable AI improve team synchronization in cooperative decision-making environments?


Possible Experimental Designs

Future BrainLatam research could combine EEG, HRV, eye-tracking, and behavioral performance in air-traffic-control simulators to examine how different explanation formats affect operators with varying levels of expertise.

Another promising direction would be to develop adaptive explanation systems, where AI adjusts the level of detail based on physiological indicators of mental workload.
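A minimal control loop for such an adaptive system could look like the sketch below. Everything here is hypothetical: the workload signal, the thresholds, and the three detail levels are placeholders chosen to show the mechanism (a hysteresis band so the explanation level does not oscillate on noisy input), not a proposal from the study itself:

```python
class AdaptiveExplainer:
    """Sketch of an adaptive-explainability loop: explanation detail is
    reduced when a physiological workload signal runs high and restored
    when spare capacity returns. All values are illustrative placeholders."""

    LEVELS = ["brief", "standard", "detailed"]

    def __init__(self, high: float = 0.7, low: float = 0.4):
        self.high, self.low = high, low  # hysteresis band avoids oscillation
        self.level = 1                   # start at "standard"

    def update(self, workload: float) -> str:
        if workload > self.high and self.level > 0:
            self.level -= 1  # operator overloaded: simplify the explanation
        elif workload < self.low and self.level < len(self.LEVELS) - 1:
            self.level += 1  # spare capacity: offer richer detail
        return self.LEVELS[self.level]

explainer = AdaptiveExplainer()
print(explainer.update(0.9))  # high workload → "brief"
print(explainer.update(0.2))  # low workload → back to "standard"
```

The hysteresis band (no change while the signal sits between `low` and `high`) matters in this setting: an explanation interface that visibly reconfigures itself on every fluctuation would itself become a source of cognitive noise.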

A third approach would involve team-based studies, investigating whether explainable AI improves coordination and collective decision-making in complex operational environments.


BrainLatam Conclusion

The work of Cartocci and colleagues highlights a fundamental insight: good AI is not simply accurate AI—it is AI that resonates with human cognition in context.

In safety-critical domains such as air traffic control, explainability is not a luxury but a core requirement for trust, usability, and operational safety.

From a Decolonial Neuroscience perspective, this means recognizing that technological systems must respect embodied cognition, experience, and cognitive ecology. The goal of intelligent systems should not be to replace human decision-making but to synchronize with it.


Reference

Cartocci, G., Veyrié, A., Cavagnetto, N., Hurter, C., Degas, A., Ferreira, A., Ahmed, M. U., Begum, S., Barua, S., Inguscio, B. M. S., Ronca, V., Borghini, G., Di Flumeri, G., Babiloni, F., & Aricò, P. (2026). Explainable artificial intelligence in air traffic control: Effects of expertise on workload, acceptance, and usage intentions. Brain Informatics, 13(1). https://doi.org/10.1186/s40708-025-00287-6



Jackson Cionek