When we think of pacemakers, insulin pumps, and other implanted medical devices (IMDs), what comes to mind is their benefit to users who rely on them to manage various medical conditions or impairments.
Over time, IMDs have become more refined and smarter with the introduction of wireless connectivity, linking to online platforms, the cloud, and mobile apps, often via Bluetooth, for maintenance, updates, and monitoring, all with the aim of improving patient care.
However, the moment such a connection is introduced into a device, whether external or internal, it also creates a potential avenue for exploitation.
The emerging problem of vulnerabilities and attack avenues in IMDs was thrust into the spotlight in 2017 by the case of St. Jude Medical (now part of Abbott), when the US Food and Drug Administration (FDA) announced a voluntary recall of 465,000 pacemakers over vulnerabilities that could be remotely exploited to tamper with the life-saving equipment.
Naturally, these devices could not just be pulled out, sent in, and swapped for a new model. Instead, patients using the pacemakers could visit their doctor for a firmware update, if they so chose.
More recently, CyberMDX researchers estimated that 22% of all devices currently in use across hospitals are susceptible to BlueKeep, a vulnerability in the Windows Remote Desktop Protocol (RDP) service. Among connected medical devices, that figure rises to 45%.
According to Christopher Neal, CISO of Ramsay Health Care, many devices we use today are not built secure-by-design, and this is an issue likely to shadow medical equipment for decades to come.
At Black Hat USA on Wednesday, Dr. Alan Michaels, Director of the Electronic Systems Lab at the Hume Center for National Security and Technology at Virginia Tech (the Virginia Polytechnic Institute and State University), echoed this sentiment.
Michaels outlined a whitepaper, viewed by ZDNet and penned by the professor alongside Zoe Chen, Paul O’Donnell, Eric Ottman, and Steven Trieu, that investigates how IMDs could compromise secure spaces, such as those used by military, security, and government agencies.
Across the US, many agencies ban external mobile devices and Internet-connected products, including smartphones and fitness trackers, from compartmentalized, secure spaces on national security grounds.
If fitness trackers or smartphones are considered a risk, they can simply be handed in, locked away in a secure locker, and collected at the end of the day. However, IMDs — as they are implanted — are often overlooked or exempt entirely from these rules.
The professor estimates that over five million IMDs have been implanted, approximately 100,000 of which belong to individuals with US government security clearances, and their value to users cannot be overlooked. That does not mean, however, that they pose no risk to security: should their devices become compromised, wearers may unwittingly become insider threats.
“Given that these smart devices are increasingly connected by two-way communications protocols, have embedded memory, possess a number of mixed-modality transducers, and are trained to adapt to their environment and host with artificial intelligence (AI) algorithms, they represent significant concerns to the security of protected data, while also delivering increasing, and often medically necessary, benefits to their users,” Michaels says.
Pacemakers, insulin pumps, hearing implants, and other IMDs that are vulnerable to exploitation could be weaponized to leak GPS and location data, as well as other potentially classified datasets or environmental information about the secure space, gathered from inbuilt sensors, microphones, and transducers that convert information from the environment into signals and data.
For example, there are smart hearing aids on the market that are linked to cloud architecture and use machine learning (ML) to record and analyze sounds, providing feedback and improving their performance; if compromised, this functionality could be hijacked.
GPS-based and passive data-collection devices are considered low-risk, whereas gadgets that use open source code, cloud functionality, AI/ML, or voice activation are considered medium- to high-risk.
When they are external and portable, medium- to high-risk devices are generally banned from secure spaces, but many IMDs now also fall into these categories and have slipped through legislative cracks.
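As a rough illustration of this kind of capability-based tiering, the sketch below assigns a risk tier from a device's declared features; the capability labels and thresholds are illustrative assumptions, not criteria taken from the whitepaper.

```python
# Illustrative only: the capability labels and thresholds are assumptions
# for demonstration, not the whitepaper's actual criteria.
from dataclasses import dataclass, field


@dataclass
class DeviceProfile:
    name: str
    capabilities: set = field(default_factory=set)  # e.g. {"gps", "cloud", "ai_ml", "voice"}


# Capabilities treated as elevating risk, per the tiers described above.
ELEVATED_RISK = {"cloud", "ai_ml", "voice", "open_source_stack"}


def risk_tier(device: DeviceProfile) -> str:
    """Passive or GPS-only devices rate as low-risk; cloud-, AI/ML-, or
    voice-enabled devices rate as medium- to high-risk."""
    flagged = device.capabilities & ELEVATED_RISK
    if not flagged:
        return "low"
    return "high" if len(flagged) >= 2 else "medium"


# Example: a cloud-connected hearing aid with ML-based sound analysis.
hearing_aid = DeviceProfile("smart hearing aid", {"cloud", "ai_ml", "microphone"})
print(risk_tier(hearing_aid))  # -> high
```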
The issue is that IMDs are difficult, or impossible, to remove or disable while in a secure facility. Nor is it possible to simply refuse IMD users access to secure spaces, as this would break discrimination laws.
Instead, a number of external mitigations have been proposed, including:
- Whitelisting: Pre-approving a set list of IMDs considered secure enough (a rough sketch of such a check follows this list). However, this requires checks and consistency across different agencies.
- Random inspections: Devices previously approved would need to have their settings inspected — but policing this, in reality, would be difficult, especially as it may require access to proprietary vendor data.
- Ferromagnetic detection: Using detectors to identify implants or other foreign devices/IMDs before an individual enters a facility, to make sure they are on an approved list.
- Zeroization: Inspecting and clearing data from the device before it leaves the secure space could improve information security, but this would require safe ways to wipe information from life-saving devices — a daunting and potentially dangerous prospect.
- Physical signal attenuation: Becoming a walking Faraday cage to stop signals while in a security facility — such as by wearing a foil vest — has also been proposed, but as noted by Michaels, this is likely to be “cumbersome” in practice.
- Administrative software: Code could be developed to put IMDs in a form of “airplane” mode — but this will require investment, time, and testing by developers.
- Personal jamming: Wearers could enable a jammer to create enough noise to stop information being transmitted. However, this may impact battery life.
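To make the whitelisting idea above concrete, here is a minimal sketch of an entry check; the model names and the fallback actions are hypothetical examples, not drawn from any real approved list or agency procedure.

```python
# Hypothetical sketch of the whitelisting check described above; the model
# names and fallback actions are invented for illustration.
APPROVED_IMDS = {"ExamplePacer-100", "ExamplePump-A"}  # pre-approved device models


def entry_check(model: str) -> str:
    """Decide what happens when an IMD wearer arrives at a secure facility."""
    if model in APPROVED_IMDS:
        return "admit"
    # Not pre-approved: fall back to other mitigations listed above, such as
    # a settings inspection or an administrative "airplane" mode.
    return "inspect, or request radio-silent mode"


print(entry_check("ExamplePacer-100"))  # -> admit
print(entry_check("UnknownImplant"))    # -> inspect, or request radio-silent mode
```

Even in this toy form, the hard parts of whitelisting are visible: the approved set has to be maintained consistently across agencies, and every rejection needs a workable fallback.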
The team says that advances in the IMD field have “far outpaced” current security directives, creating a need for new policy considerations, and has called for Intelligence Community Policy Memorandum (ICPM) 2005-700-1, Annex D, Part I (.PDF) to be amended to cover smart IMDs, in keeping with Intelligence Community Policy Guidance (ICPG) 110.1 (.PDF).
Speaking to ZDNet, Michaels said that the simplest way to prevent IMDs from becoming a threat in secure facilities is to physically shield the device, which is likely to be far safer than modifying firmware, as “that may create an untested operational state that (although very unlikely) could impact its intended operations or health of the user.”
The professor added that the security issues surrounding IMDs are likely to increase over time, and as they become more capable, security will become a balancing act between legislation, what vendors consider to be “privacy,” and battery consumption — one of the few elements constraining how far IMDs can go in terms of intelligent technologies.
“Moreover, I think that as the number of devices implanted increases, they become a more feasible target for malicious actors — given the expected lifetimes of many devices being 10+ years, the question almost becomes ‘how hard is it to hack a 10-year old IoT device,’” Michaels commented. “Maybe not an immediate threat, but an increasing one over time, and very hard to enact a recall / firmware update.”