Brain-computer interfaces (BCIs) offer a direct link between the grey matter of our human brains and the silicon and circuitry of computers. New technologies always bring new security threats, and with the human brain a single store of the most sensitive and private information imaginable, the security stakes couldn’t be higher.
If we’re soon to be plugging computers directly into our brains, how can we protect that connection from those who want to attack it?
The first wave of brain-computer interfaces is beginning to make its way onto the market, offering users a way of keeping tabs on their stress levels, controlling apps, and monitoring their emotions. BCI tech is also progressing outside the consumer arena, with medical researchers using it to help those with spinal injuries move paralysed limbs and restore a lost sense of touch.
Ultimately, BCIs could offer a way of communicating thoughts – a form of human-machine telepathy.
So why would someone want to hack a BCI?
Being able to read the thoughts or memories of a political leader, or a business executive, could be a huge coup for intelligence agencies trying to understand rival states, or for criminals looking to steal commercial secrets or for blackmail. There’s a military angle too; the US is already looking at BCIs as a way of controlling fleets of drones or cyber defences far more effectively than is now possible – being able to hack into those systems would create a huge advantage on the battlefield.
The consequences of an attack or data breach involving a BCI could be an order of magnitude worse than those involving other systems: leaked email logs are one thing, leaked thought logs quite another. Similarly, the risks of ransomware become far greater if it’s targeted at BCIs rather than corporate systems; making it impossible to use a PC or a server is one thing, but locking up the connection between someone’s brain and the wider world could be far worse.
BCIs could ultimately become an authentication mechanism in their own right: our patterns of brain activity are so distinctive that they could be used as a way of granting access to sensitive systems, which could make it worthwhile for attackers to try to copy them. “Attempts to trick such a biometric will likely be very difficult, because brainwaves are not visible (like other biometrics like a fingerprint, iris, etc.) and cannot be replicated by another person… without direct access to the person and their brain to record the person,” researchers at Israel’s Ben-Gurion University of the Negev wrote in a recent paper.
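To make the idea concrete, here is a minimal sketch of how brainwave-based authentication might work in principle. It assumes EEG windows sampled at 256Hz; the frequency bands, the feature vector and the acceptance threshold are all illustrative assumptions, not details of any real BCI product.

```python
# Minimal sketch of brainwave-style biometric matching (illustrative only).
# Assumes a 1-D EEG window sampled at 256 Hz; band choices and the
# acceptance threshold are hypothetical.
import numpy as np

FS = 256  # sampling rate in Hz (assumed)

def band_power(window, lo, hi):
    """Average spectral power of an EEG window in the [lo, hi] Hz band."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / FS)
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[mask].mean()

def features(window):
    """Feature vector: power in the classic delta/theta/alpha/beta bands."""
    bands = [(0.5, 4), (4, 8), (8, 13), (13, 30)]
    vec = np.array([band_power(window, lo, hi) for lo, hi in bands])
    return vec / vec.sum()  # normalise so the profile, not amplitude, matters

def matches_template(window, template, threshold=0.05):
    """Accept the user if their band-power profile is close to the enrolled one."""
    return np.linalg.norm(features(window) - template) < threshold
```

A real system would use far richer features and per-user calibration; the point is simply that the ‘password’ here is a statistical profile of a signal only the user’s own brain can produce.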
It’s early days, but there are already signs that security will need to be a key consideration. Researchers have shown, for example, that BCIs could be used to get people to disclose information ranging from their PINs to their religious convictions.
Some of the potential threats to BCIs will be carry-overs from other tech systems. Malware could interfere with the data being acquired from the brain, or with the signals being sent from the device back to the cortex, either altering the data or exfiltrating it.
Man-in-the-middle attacks could also be recast for BCIs: attackers could either intercept the data being gathered from the headset and replace it with their own, or intercept the data being used to stimulate the user’s brain and replace it with an alternative. Hackers could use methods like these to get BCI users to inadvertently give up sensitive information, or gather enough data to mimic the neural activity needed to log into work or personal accounts.
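The standard defence against this kind of tampering is to authenticate and encrypt the link between headset and host, so that an intercepted-and-replaced packet is rejected rather than trusted. Below is a minimal sketch using AES-GCM from Python’s cryptography library; the packet format, the associated-data tag and the assumption that a key was shared during device pairing are all illustrative.

```python
# Sketch: protecting the headset-to-host link with authenticated encryption,
# so a man-in-the-middle's altered packets are detected rather than trusted.
# Key exchange is assumed to have happened out of band (e.g. at pairing).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # shared at pairing time (assumed)
aead = AESGCM(key)

def send_sample(raw_eeg_bytes: bytes) -> bytes:
    nonce = os.urandom(12)                       # unique per packet
    return nonce + aead.encrypt(nonce, raw_eeg_bytes, b"bci-v1")

def receive_sample(packet: bytes) -> bytes:
    nonce, ciphertext = packet[:12], packet[12:]
    # Raises cryptography.exceptions.InvalidTag if the payload was altered.
    return aead.decrypt(nonce, ciphertext, b"bci-v1")
```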
Other threats to BCI security will be unique to brain-computer interfaces. Researchers have identified malicious external stimuli as one of the most potentially damaging attacks on BCIs: feeding specially crafted stimuli to the user or the BCI itself in an attempt to extract certain information – showing users images to gather their reactions, for example. Similar attacks could hijack users’ BCI systems by feeding in fake versions of the neural inputs, causing them to take unintended actions – potentially turning BCIs into bots.
Other attacks hinge on introducing or removing data from BCIs: injecting noise to diminish the signal-to-noise ratio, for example, making the signal received from the brain difficult or impossible to read. Similarly, attackers interfering with the noise cancellation of BCI systems – which separates the useful brain signals from the general background fuzz – could cause a denial of service: annoying if it’s an entertainment system that’s compromised, life-altering if it’s a BCI that allows someone to walk or control a wheelchair.
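A noise-injection attack of this kind would show up as a collapse in the signal-to-noise ratio, which a defender could monitor for. The sketch below estimates a crude per-window SNR; the band boundaries and the alarm threshold are illustrative assumptions rather than values from any deployed system.

```python
# Sketch: detecting a noise-injection attack as a collapse in the
# signal-to-noise ratio. The 1.0 threshold and the band split are
# illustrative guesses.
import numpy as np

FS = 256  # sampling rate in Hz (assumed)

def snr_estimate(window):
    """Crude SNR: power in the useful 0.5-30 Hz band vs. everything above it."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / FS)
    signal = spectrum[(freqs >= 0.5) & (freqs <= 30)].sum()
    noise = spectrum[freqs > 30].sum() + 1e-12  # avoid division by zero
    return signal / noise

def looks_jammed(window, threshold=1.0):
    """True when injected broadband noise has drowned out the brain signal."""
    return snr_estimate(window) < threshold
```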
SEE: Scientists are using brain-computer connections to restore a lost sense of touch
Currently, while we know something about the effects of normal BCI use on the brain, we don’t know how an attack on a BCI could, deliberately or inadvertently, damage the grey matter. A hijacked BCI disrupting the way a user’s brain works sounds like a sci-fi plot, but it may well be possible.
“What type of damage will [an attack] do to the brain, will it erase your skills or disrupt your skills? What are the consequences – would they come in the form of just new information put into the brain, or would it even go down to the level of damaging neurons that then leads to a rewiring process within the brain that then disrupts your thinking?” says Dr Sasitharan Balasubramaniam, director of research at the Waterford Institute of Technology’s Telecommunication Software and Systems Group (TSSG). “It’s not only at the information level, it could also be the physical damage as well,” he says.
The brains of BCI users will change and adapt as they learn to use the system, just as they would in response to fresh experiences or the acquisition of new skills in the course of normal life. However, BCIs’ ability to drive neuroplasticity could bring with it a new level of risk. “BCIs have the potential to change the brain of the user (e.g. to facilitate motor or cognitive improvements to people with disabilities). To preserve the physical and mental integrity of the user, BCI systems need to ensure that no unauthorized person can modify their functioning,” Javier Mínguez, cofounder and CSO of neurotechnology company Bitbrain, tells ZDNet.
So how can you protect such systems, particularly given the sensitivity of the information they hold and the potentially disastrous effects of an attack? While BCIs themselves may still be relatively novel, the technologies needed to secure them likely won’t be: anonymisers, security standards and protocols, antivirus software, and encryption are all being suggested as means of staving off BCI attacks.
And, like any other technology, brain-computer interfaces will need a multi-layered security approach to keep them safe, locking down each individual element of the BCI. “I don’t think that the countermeasures would be individual solutions. Going forward, we need to integrate so many different things, from how signals are wirelessly sent to the interface that might be just outside the head, all the way to integrating that with the machine learning for determining whether it’s the right or wrong pattern [a BCI is using], and then using that to actually deter the attacks,” TSSG’s Balasubramaniam says.
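In its simplest form, the machine-learning check Balasubramaniam describes, deciding whether a signal is “the right or wrong pattern”, could be an anomaly detector trained on feature vectors from a user’s legitimate sessions. Here is a hedged sketch using scikit-learn’s IsolationForest, with random stand-in data where real EEG features would go; the feature dimensions and contamination rate are assumptions.

```python
# Sketch of the "right or wrong pattern" check: an anomaly detector trained
# on feature vectors from a user's legitimate sessions flags injected or
# replayed signals that don't fit their profile.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
legit_features = rng.normal(size=(500, 8))  # stand-in for real session data

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(legit_features)

def accept_window(feature_vec) -> bool:
    """Only pass a signal on to the BCI pipeline if it looks like this user."""
    return detector.predict(feature_vec.reshape(1, -1))[0] == 1  # 1 = inlier
```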
The level of risk in using BCIs also varies according to the type of system: a headset-based, non-invasive system gathers a lower-quality signal and is easy for a user to switch off, blocking external communication; an invasive system, meanwhile, gathers high-quality signals directly from the brain’s surface and requires surgery to disengage fully.
“The more accurate and powerful a BCI is, the higher the risk could be,” says Mínguez. A more comprehensive measurement of the brain will potentially contain more sensitive information and therefore requires stricter safety standards, as do devices that modify brain function. “This is especially relevant, because the target users of these systems are generally a vulnerable population, including patients with certain neurological disorders,” he says.
SEE: Mind-controlled drones and robots: How thought-reading tech will change the face of warfare
What’s more, many of the standards and principles of good tech security and data hygiene used in other systems can be brought across to BCIs: educating users, gathering only the minimum data necessary for the system to work, locking down when and how the system can be accessed and by whom, and so on. However, while the technology side of the equation may have good security precedents elsewhere, the unknown unknowns of the human brain could prove BCIs’ greatest security challenge.
“In terms of the security of computational systems in general, this is a branch of science that is advanced enough and we probably have good enough understanding to know how to do the right thing from a technical perspective,” says Tamara Bonaci, affiliate faculty member at the National Science Foundation’s Center for Neurotechnology.
“What’s probably a little more interesting and likely much harder is the question of, do we know enough about the brain and about the human body and electrophysiological signals. Something that may not mean very much today might be recognised as something that is revealing sensitive information about the person tomorrow,” she warns.
However, the complexity of the human brain also brings good news for BCI security. Unlike commonly compromised systems such as smartphones and tablets, BCIs aren’t one-size-fits-all: they require substantial training to adapt them to their individual user.
“That signal on the surface looks pretty much like white noise. It’s very hard to discern any useful information there. You kind of have to zoom in on specific parts of the signal and know exactly what you’re looking for,” Bonaci says.
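The “zooming in” Bonaci describes is, in practice, a matter of filtering and averaging. As a rough sketch, the snippet below band-pass filters a raw, noise-like EEG window around the 8-13Hz alpha rhythm using SciPy; the sampling rate and filter order are assumptions for illustration.

```python
# Sketch of "zooming in" on a noise-like EEG signal: band-pass filtering
# around the 8-13 Hz alpha rhythm. Sampling rate and filter order are
# illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 256  # sampling rate in Hz (assumed)

def alpha_component(raw_window):
    """Isolate the alpha band from a raw EEG window (cutoffs normalised to Nyquist)."""
    b, a = butter(N=4, Wn=[8 / (FS / 2), 13 / (FS / 2)], btype="bandpass")
    return filtfilt(b, a, raw_window)
```

Without knowing which slice of the spectrum to extract, and what the target user’s patterns look like within it, an attacker recording the raw stream sees something close to white noise, which is exactly Bonaci’s point.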