Forget those futuristic interfaces controlled with waving hands. That’ll get old. Brainwaves are the ultimate control, and are, in fact, about the only channel open to severely disabled people.
Works on many machine learning models
Brain-computer interfaces (BCI), first developed in the 1980s, are widely used by disabled people, as well as in research. The tech that enables the BCI to determine the intended letter is a type of machine learning classifier that is also used for photo identification, speech recognition, and malware detection. The attack outlined in the paper discussed below can affect those and others as well, which is scarier to me than the pure BCI hack.
But back to BCI. A common use of BCI for the disabled is to enable them to spell out words on a computer. A recent paper (Tiny Noise Can Make an EEG-Based Brain-Computer Interface Speller Output Anything) has found that these spellers are easily hacked, once you master the details.
Imagine you’re locked into your brain by a crippling disease like amyotrophic lateral sclerosis (Lou Gehrig’s disease). You’ve lost control of your hands and voice. Your BCI speller is your only tool to tell anyone what you need.
Horror show
Your evil nephew comes into your hospital room, and before you know it your speller is altering your will and has you demanding a Do Not Resuscitate (DNR) order. If this isn’t a Stephen King story, it should be.
The researchers, based in China, Australia, and at UC San Diego, have demonstrated that these BCIs and similar classifiers can be compromised by tiny adversarial perturbations. Too small for the user to notice when added to the EEG signals, the perturbations can mislead the BCI into spelling anything the attacker wants.
Spelling is a common BCI use case, and there are multiple types of EEG-based spellers, some dating back to the 1980s. One model is the steady-state visual evoked potential (SSVEP) speller. It shows the user an array of characters, each flickering at a specific frequency; focusing on one character evokes brainwaves at that same frequency, so the chosen character’s frequency can be read out of the EEG signals.
The system can discriminate among 40 characters, with flicker frequencies ranging from 8 Hz to 15.8 Hz in 0.2 Hz increments. A machine learning classifier takes the EEG signal and determines which character the user selected.
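To make that concrete, here’s a minimal sketch of how a CCA-based SSVEP classifier can work. The sampling rate, window length, harmonic count, and use of scikit-learn are my assumptions for illustration, not details taken from the paper.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

FS = 250            # assumed EEG sampling rate, in Hz
N_HARMONICS = 2     # sine/cosine pairs per flicker frequency (assumption)

# The 40-character grid: flicker frequencies from 8 Hz to 15.8 Hz in 0.2 Hz steps
FREQS = np.arange(8.0, 15.8 + 0.1, 0.2)   # 40 frequencies

def reference_signals(freq, n_samples, fs=FS, n_harmonics=N_HARMONICS):
    """Build sine/cosine reference waveforms for one flicker frequency."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(refs)            # shape: (n_samples, 2 * n_harmonics)

def classify_ssvep(eeg):
    """eeg: (n_samples, n_channels) window of recorded EEG.
    Returns the index of the character whose flicker frequency
    correlates most strongly with the brainwaves."""
    scores = []
    for f in FREQS:
        Y = reference_signals(f, eeg.shape[0])
        cca = CCA(n_components=1)
        x_c, y_c = cca.fit_transform(eeg, Y)
        scores.append(np.corrcoef(x_c[:, 0], y_c[:, 0])[0, 1])
    return int(np.argmax(scores))            # index into the 40-character array
```

The idea: for each of the 40 candidate frequencies, compare the EEG against ideal sine waves at that frequency, and pick the frequency, and thus the character, with the strongest correlation.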
How to hack a BCI
There’s quite a bit of math involved, but hey, that’s why we have computers. Taking the case of an SSVEP system, start by understanding the flicker frequencies assigned to the characters.
The researchers found that canonical correlation analysis (CCA) – used to extract the underlying correlation between two multi-channel time series – is the most effective method. Tack on a wad of statistical and geometric math to resolve the frequency map used by the classifier and voilà: everything you need to construct an adversarial perturbation template. The template adds or subtracts tiny amounts of signal to produce the adversary’s desired output.
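The paper’s actual construction of the perturbation template is the mathematical heart of the attack, and I won’t pretend to reproduce it here. As a toy stand-in, the sketch below shows the end result the attacker is after: a faint, precomputed signal added to the raw EEG that steers the classifier toward the attacker’s chosen character. The sinusoidal template, the amplitude, and the commented helper names are all illustrative assumptions, not the researchers’ method.

```python
# Toy illustration only: the paper derives its template from the classifier's
# learned frequency map; this stand-in simply injects a faint sinusoid at the
# attacker's target frequency. Reuses FS, FREQS and classify_ssvep() from the
# sketch above.
import numpy as np

def adversarial_perturbation(target_idx, n_samples, n_channels, amplitude=0.05):
    """Build a tiny additive template nudging the classifier toward the
    attacker's chosen character. The amplitude would need tuning so the
    perturbation stays below the victim's (and clinician's) notice."""
    target_freq = FREQS[target_idx]
    t = np.arange(n_samples) / FS
    template = amplitude * np.sin(2 * np.pi * target_freq * t)
    # Broadcast the same faint waveform across every EEG channel
    return np.tile(template[:, None], (1, n_channels))

# Intended effect (record_window() is a hypothetical EEG capture helper):
# eeg = record_window()                     # (n_samples, n_channels)
# p = adversarial_perturbation(25, eeg.shape[0], eeg.shape[1])
# classify_ssvep(eeg)      -> what the user was focusing on
# classify_ssvep(eeg + p)  -> the character the attacker chose
```

The unsettling part is how little signal it takes: the perturbation rides along with the legitimate brainwaves, so nothing looks wrong on the screen or in the raw trace.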
The Storage Bits take
As with most of computing, research has focused on making classifiers faster and more accurate. Security is an afterthought.
Yet as more and more of our technology incorporates machine learning models, their security, or lack thereof, becomes critical. Imagine malware that not only hides itself, as most malware already does, but also infects malware classifiers to blind them to any traces of its presence.
I’m sure the bad guys are already working on it. I wonder which companies will be apologizing next year for their appalling lack of foresight.
As an analyst who’s tracked my predictions over several decades, I tend to overestimate the speed at which desired outcomes occur, and underestimate the speed at which less desirable outcomes develop. I hope I’m wrong, but I fear hacking machine learning models is a near-term problem, if it hasn’t already begun.
Comments welcome, as always.