Cognitive-enhanced autonomous driving: EEG-trained drive-think from Tsinghua

A team at Tsinghua University's Institute for AI Industry Research (AIR) has unveiled a significant advance in autonomous driving. The announcement came at NeurIPS 2025 and centers on a method that lets autopilot systems learn from the cognitive responses of human drivers.

The researchers introduced an approach they call cognitive‑enhanced autonomous driving. It uses electroencephalogram (EEG) signals recorded from human drivers to train autopilot models to make decisions in a more human‑like way. Importantly, the method does not require EEG sensors in production cars, keeping system costs at current levels.

The training architecture, dubbed drive-think, pairs onboard camera data with EEG recordings during training to extract the driver's latent cognitive responses to road situations. Using contrastive learning, the driving network then learns to reproduce those responses from vision alone when evaluating a scene.
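
The article does not spell out the exact objective, so the following is only a minimal sketch of contrastive alignment as commonly practiced: a symmetric InfoNCE loss (CLIP-style) between the outputs of two hypothetical encoders, one for camera scenes and one for EEG windows. All names and the temperature value are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(scene_emb, eeg_emb, temperature=0.07):
    """Symmetric InfoNCE loss: pull each camera-scene embedding toward the
    EEG embedding recorded at the same moment, push it away from the EEG
    embeddings of other moments in the batch.

    scene_emb, eeg_emb: (batch, dim) tensors from hypothetical encoders.
    """
    # L2-normalize so the dot product below is cosine similarity.
    scene_emb = F.normalize(scene_emb, dim=-1)
    eeg_emb = F.normalize(eeg_emb, dim=-1)

    # Similarity matrix: entry (i, j) compares scene i with EEG window j.
    logits = scene_emb @ eeg_emb.T / temperature

    # Matching scene/EEG pairs sit on the diagonal.
    targets = torch.arange(scene_emb.size(0), device=scene_emb.device)

    # Average the scene->EEG and EEG->scene classification losses.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.T, targets))
```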

The pipeline unfolds in two stages. First, during training, the system distills cognitive skills from human brain data; second, during real-world use, it relies only on standard video from the cameras. In effect, human driving experience is transferred to the machine-vision model in implicit form.
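
Continuing the assumptions above, here is a rough sketch of how such a two-stage setup could be wired: the EEG encoder serves only as a training-time teacher, and the deployed policy is a plain camera-to-trajectory network. The module names, toy architecture, and loss weighting are all hypothetical.

```python
import torch.nn as nn

class DrivingPolicy(nn.Module):
    """Hypothetical end-to-end policy mapping camera frames to a planned
    trajectory. Only this module ships in the car; the EEG encoder is a
    training-time teacher discarded after stage one."""

    def __init__(self, feat_dim=256, horizon=6):
        super().__init__()
        # Stand-in for a real vision backbone.
        self.scene_encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # Predicts `horizon` future (x, y) waypoints.
        self.planner = nn.Linear(feat_dim, horizon * 2)

    def forward(self, frames):
        feats = self.scene_encoder(frames)
        return self.planner(feats), feats

# Stage 1 (training): an ordinary planning loss plus the contrastive
# alignment loss from the previous sketch, so scene features are shaped
# by both driving labels and recorded EEG:
#     plan, feats = policy(frames)
#     loss = planning_loss(plan, expert_plan) \
#          + lambda_align * contrastive_alignment_loss(feats, eeg_encoder(eeg))
#
# Stage 2 (deployment): camera input alone; the EEG branch is dropped:
#     plan, _ = policy(frames)
```

Because the EEG branch appears only in the stage-one loss, the deployed network would carry no extra sensors or compute, which matches the article's no-added-hardware claim.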

Tests on the nuScenes dataset and the Bench2Drive simulation platform showed clear gains: trajectory planning error fell, and collisions dropped by roughly 18–26%. In complex, high-risk situations such as abrupt cut-ins, the system behaved more cautiously and predictably, closer to the way people drive. Taken together, the results point to a tangible safety gain.

The researchers said this is the first study to directly use human cognitive signals to improve end-to-end autonomous driving systems. The work opens new avenues for safer autopilots and for advancing physical intelligence guided by how the human brain operates: an intriguing bridge between neuroscience and machine perception that adds no hardware burden.