How MIT's JETS AI reads Apple Watch signals to spot illness

Drawing on three million days of Apple Watch data, researchers from MIT and Empirical Health have built a new AI model that can flag illnesses with striking confidence. The system leans on the JEPA architecture proposed by Yann LeCun, which trains neural networks to predict abstract representations of missing data rather than reconstructing the raw values literally. For the fragmented, stop-and-start signals typical of wearables, that approach is a natural fit.

The team analyzed records from 16,522 participants collected over several years, tracking 63 kinds of metrics tied to heart function, breathing, sleep, activity, and general health indicators. Only 15% of participants had medical diagnoses, yet the model—called JETS—was pretrained on the full dataset and then fine-tuned on the labeled portion. This strategy lets the system pick up patterns from everyday life, not just clinical cases.

To adapt JEPA to time-series data, the researchers converted each observation into a token, applied masking, and trained the model to predict hidden representations. After training, they benchmarked JETS against several strong baseline architectures—and the results stood out. The model reached an AUROC of 86.8% for hypertension, 81% for chronic fatigue syndrome, and 86.8% for sinus node dysfunction.
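The tokenize-mask-predict loop described above can be sketched in miniature. This is an illustrative toy, not the paper's implementation: the linear "encoders," the mean-pooled context, and all dimensions are assumptions chosen for brevity. The one property it does reproduce faithfully is the JEPA idea that the loss compares predicted and target *representations*, never raw signal values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (names and sizes are illustrative, not from the paper):
# each wearable observation becomes one token of 63 metrics; a context
# encoder sees only the visible tokens, and a predictor must match the
# target encoder's latent for the masked tokens.
n_tokens, n_metrics, d_latent = 10, 63, 16

tokens = rng.normal(size=(n_tokens, n_metrics))        # one row per observation
W_ctx = rng.normal(size=(n_metrics, d_latent)) * 0.1   # context encoder (linear toy)
W_tgt = rng.normal(size=(n_metrics, d_latent)) * 0.1   # target encoder (linear toy)
W_pred = rng.normal(size=(d_latent, d_latent)) * 0.1   # predictor head

# Hide 4 of the 10 tokens, mimicking masking of the time series.
mask = np.zeros(n_tokens, dtype=bool)
mask[rng.choice(n_tokens, size=4, replace=False)] = True

# Context path: encode visible tokens, pool them, predict latents for masked ones.
ctx_latents = tokens[~mask] @ W_ctx
ctx_summary = ctx_latents.mean(axis=0)                 # crude pooling of visible context
predicted = np.tile(ctx_summary @ W_pred, (mask.sum(), 1))

# Target path: what the masked tokens actually encode to.
target = tokens[mask] @ W_tgt

# JEPA-style loss: distance in representation space, not raw-signal space.
loss = float(np.mean((predicted - target) ** 2))
print(loss)
```

A real implementation would use transformer encoders, an exponential-moving-average target network, and gradient updates; the sketch only fixes where the prediction happens, which is the part that distinguishes JEPA from fill-in-the-blanks reconstruction.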

AUROC is not plain accuracy: it measures how well a model ranks likely cases above unlikely ones, with 50% meaning random guessing and 100% meaning perfect separation. Even by that yardstick, JETS' edge over classical algorithms comes through clearly. The authors emphasize that wearables still have vast untapped potential in medicine, and that newer architectures can extract value even from data once dismissed as too patchy or irregular.
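The ranking interpretation of AUROC can be made concrete: it equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A small sketch with made-up scores and labels (not the study's data):

```python
import numpy as np

# Toy risk scores and labels, invented for illustration: 1 = diagnosed, 0 = not.
labels = np.array([0, 0, 1, 0, 1, 1, 0, 1])
scores = np.array([0.1, 0.4, 0.35, 0.2, 0.8, 0.7, 0.5, 0.9])

pos = scores[labels == 1]
neg = scores[labels == 0]

# Count positive/negative pairs where the positive outranks the negative
# (ties count as half a win), then normalize by the number of pairs.
wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
auroc = wins / (len(pos) * len(neg))
print(auroc)  # → 0.875
```

Here 14 of the 16 positive/negative pairs are ranked correctly, giving 0.875. Note that AUROC says nothing about the score threshold a clinic would actually use; it only certifies that the model orders patients sensibly.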

In short, the study suggests that everyday gadgets like the Apple Watch can become a powerful early-warning system—if models are trained the right way, not demanding pristine inputs but learning to read the world through incomplete signals.