More stories

  •

    Leaving LastPass? Here's how to get your passwords out

    LastPass is changing its free offering, and some are looking for a new home for their passwords. But how do you get your passwords and other data out of LastPass?

    Here’s how.
    There are a few different ways to get your data out of LastPass, but the easiest, most reliable way I’ve found is to log into your account through a browser on a computer.
    You can then export your data as a CSV file, a format most password applications and services will accept. (Importing it somewhere new is a whole other topic; I suggest you test things and take your time, because there’s always the risk of losing your password data.)
    Step 1
    First, go to lastpass.com and log into your account.

    Step 2
    If you use two-factor authentication, you’ll need to enter those details.

    Step 3

    You’re in. Now click on Advanced Option…

    Step 4
    Click on Export.

    Step 5
    Re-enter your credentials.

    Step 6
    There’s your data!

    Step 7
    Now you need to select this data, copy it, paste it into a text file, and save it with a .csv extension.
    I don’t recommend leaving all your passwords lying around unencrypted, so either encrypt this file in the interim or import it straight into whatever service you are going to use next.
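    If you want to handle that interim encryption yourself, here’s a minimal sketch in Python using the third-party cryptography package (pip install cryptography). The file name is just an example, and you’d still need to securely delete the plaintext afterward.

```python
# Minimal sketch: encrypt the exported CSV while it sits on disk.
# Assumes the export was saved as "lastpass_export.csv" (example name).
# Requires the third-party package: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # keep this key safe, and NOT next to the file
fernet = Fernet(key)

with open("lastpass_export.csv", "rb") as f:
    plaintext = f.read()

with open("lastpass_export.csv.enc", "wb") as f:
    f.write(fernet.encrypt(plaintext))

print("Encrypted copy written. Key (store separately):", key.decode())
```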
    Also, don’t kill your LastPass account until you are sure that your new service is set up and your passwords are accessible! Remember, these are your passwords!
    I can’t stress this step enough! I have heard from multiple people over the years who have gotten themselves into an enormous mess doing this.

  •

    Microsoft says SolarWinds hackers downloaded some Azure, Exchange, and Intune source code

    Microsoft’s security team said today it has formally completed its investigation into its SolarWinds-related breach and found no evidence that hackers abused its internal systems or official products to pivot and attack end-users and business customers.

    The OS maker began investigating the breach in mid-December after it was discovered that Russian-linked hackers breached software vendor SolarWinds and inserted malware inside the Orion IT monitoring platform, a product that Microsoft had also deployed internally.
    In a blog post published on December 31, Microsoft said it discovered that hackers used the access they gained through the SolarWinds Orion app to pivot to Microsoft’s internal network, where they accessed the source code of several internal projects.
    “Our analysis shows the first viewing of a file in a source repository was in late November and ended when we secured the affected accounts,” the company said today, in its final report into the SolarWinds-related breach.
    Microsoft said that even after it cut off the intruders’ access, the hackers continued trying to access Microsoft accounts throughout December and into early January 2021, weeks after the SolarWinds breach was disclosed and after Microsoft had made it clear it was investigating the incident.
    “There was no case where all repositories related to any single product or service was accessed,” the company’s security team said today. “There was no access to the vast majority of source code.”
    Instead, the OS maker said intruders viewed “only a few individual files […] as a result of a repository search.”

    Microsoft said that, based on the search queries the attackers ran inside its code repositories, the intruders appeared to be focused on locating secrets (such as access tokens) that could be used to expand their access to other Microsoft systems.
    The Redmond company said these searches failed because of internal coding practices that prohibited developers from storing secrets inside source code.
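    Microsoft didn’t describe its internal tooling, but the general idea behind such practices is easy to illustrate: a scan that flags token-like strings before code is committed. Here’s a hypothetical sketch in Python; the patterns are generic examples, not Microsoft’s actual rules.

```python
# Hypothetical pre-commit-style secret scan; the regexes below are
# generic illustrations, not Microsoft's actual rules.
import re
import sys

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS-style access key ID
    re.compile(r"(?i)(api[_-]?key|token|secret)\s*=\s*['\"][^'\"]{16,}['\"]"),
]

def scan(path):
    hits = []
    with open(path, encoding="utf-8", errors="ignore") as f:
        for lineno, line in enumerate(f, 1):
            if any(p.search(line) for p in SECRET_PATTERNS):
                hits.append(f"{path}:{lineno}: possible secret")
    return hits

if __name__ == "__main__":
    findings = [hit for path in sys.argv[1:] for hit in scan(path)]
    print("\n".join(findings))
    sys.exit(1 if findings else 0)  # non-zero exit would block the commit
```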
    Some source code was also downloaded
    But beyond viewing files, the hackers also managed to download some code. However, Microsoft said the data was not extensive and that the intruders only downloaded the source code of a few components related to some of its cloud-based products.
    Per Microsoft, these repositories contained code for:
    a small subset of Azure components (subsets of service, security, identity)
    a small subset of Intune components
    a small subset of Exchange components
    All in all, the incident doesn’t appear to have damaged Microsoft’s products or have led to hackers gaining extensive access to user data.


  •

    Ethernet: Why your home office could use more of it

    It’s been almost a year since many of us started working from home, and it doesn’t look like that’s going to change anytime soon. In previous Jason Squared shows, Jason Cipriani and I have talked about securing your home internet and even how to improve your Wi-Fi signal. However, there are alternative ways to improve connectivity throughout your home. Today, we will talk about one of the oldest — and perhaps still one of the best ways to connect your equipment to the internet — Ethernet.

    What the heck is Ethernet, exactly?
    Ethernet is a wired network communications standard developed in the early 1970s by a computer engineer named Bob Metcalfe and his team of researchers at Xerox’s Palo Alto Research Center. (Metcalfe was, for many years, also a well-known computer industry columnist at InfoWorld, and he co-founded 3Com, which HP bought in 2010.)

    Category 5e Ethernet Cables
    Steve Heap, Getty Images/iStockphoto
    Over the years, Ethernet morphed from using coaxial cable to twisted-pair and fiber-optic cables. The original standard called for network frames sent at 10Mbps. Today, it’s not uncommon for Ethernet to communicate at 1Gbps over twisted-pair cable, and it can move as fast as 40Gbps or 100Gbps over fiber-optic cables on enterprise networks within data centers or in specialized environments.
    Why do we want to use Ethernet at home?
    Chances are, you probably already do use at least some Ethernet at home. Most consumer broadband installations will have a residential gateway that incorporates Wi-Fi and some broadband access device, like a cable modem or an optical network terminal (ONT). Those will be connected with a short Cat-5 or Cat-6 Ethernet cable and the modular RJ-45 8-pin connector.
    There are many homes in which that’s likely the extent of the Ethernet install. But all home routers and residential gateways have one or more additional Ethernet ports, allowing you to expand that Ethernet network. So, for example, in my own home, with my AT&T ARRIS residential gateway (the main router), I have a few extra Ethernet ports. I have a 24-port Ethernet switch connected to one of these to add more Ethernet-connected devices.
    But you’re not stuck with the number of ports on your router. An Ethernet switch is like the USB hubs you can buy for your PC or Mac: if you run out of Ethernet ports, you buy a switch, and it gives you more network interfaces.
    Why would I want to connect more devices to Ethernet rather than use Wi-Fi?
    There are a lot of reasons. For starters, Ethernet is super-reliable. It is secure; it’s far more difficult for someone to sniff your network traffic if you use Ethernet, especially if you are using something like a VLAN. It’s also considerably faster than the Wi-Fi connectivity you will get in most home environments. Even with Wi-Fi 6, you will only get 450Mbps to 650Mbps under optimal conditions, and you will still get interference and latency. But with my 1Gbps fiber connection from AT&T, I frequently get over 900Mbps downloads, close to wireline speeds, when using a computer connected to the Ethernet switch.
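    If you want to see the difference on your own network, a quick sketch using the third-party speedtest-cli package (pip install speedtest-cli) will do; run it once over Wi-Fi and once plugged into the switch, and compare.

```python
# Quick throughput check; run once over Wi-Fi and once over Ethernet.
# Requires the third-party package: pip install speedtest-cli
import speedtest

st = speedtest.Speedtest()
st.get_best_server()          # pick the nearest test server
down = st.download() / 1e6    # bits/sec -> Mbps
up = st.upload() / 1e6

print(f"Download: {down:.0f} Mbps, Upload: {up:.0f} Mbps")
```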

    The other thing that’s good about Ethernet is its generous distance limit of about 100 meters per run, with full speed over that entire distance. This is good to have in a multi-story home, where you might have, say, an entertainment center in your basement or a bedroom on an upper floor that you want to give high-speed network connectivity.
    Perhaps the Wi-Fi from the bottom floor, or even your mesh network, just isn’t cutting it because there are too many walls or whatever. You can bridge your network with a Wi-Fi access point connected over Ethernet, with a long cable run back to the switch or the router. You just need to be able to drop that cable through a wall soffit, through the attic or a crawlspace, or run it along the wall or under the carpet to where it has to go.
    In my case, my office is in the room next to where all my broadband equipment is, so I hired a handyman to install an Ethernet jack on both sides of the adjoining wall. But I know many people who have simply bored a hole through the wall and covered it with an inexpensive plastic plate or a grommet kit made for pushing cables through. You can get those at Home Depot.
    Is it expensive to build out your Ethernet network?

    An inexpensive 8-port Ethernet switch, made by Netgear.
    It doesn’t have to be expensive. I frequently see unmanaged desktop 16-port Gigabit Ethernet switches from Netgear, TP-LINK, and D-Link on Amazon for less than $60. You can buy pre-fabricated cables that are as long as 100 feet for about $22 from Best Buy or Amazon, and I have seen them as cheap as $12 at Walmart, too. But you can also crimp your own cables with a crimping tool and buy the twisted pair cable spools and the RJ-45 heads, and that’s not that expensive if you have to do your own wiring.
    Many streaming devices have Ethernet ports already built-in, such as the Roku, the Amazon Fire TV, the Apple TV, and gaming consoles like the Xbox and the Playstation. Network adapters for laptops are also not that expensive. We talked about hubs a few weeks ago; many on the market include Ethernet and HDMI and extra USB-C and USB-A ports for like $40, such as the one from Anker.
    What about the pricier Ethernet switches?
    The higher-end models are managed switches and are more expensive because they have special segmenting and security capabilities, such as for VLANs. These are typically for small and medium business use. But the other thing these more expensive switches can do is Power over Ethernet or PoE.
    In addition to carrying Ethernet communication, a Cat5-Cat6 twisted pair cable can also carry power. That means, if you wanted to place, say, a Wireless Access Point in some remote part of your house or in your small business where no AC power outlets exist, all you need to do is string the Cat5 cable to that location and plug in the device. This is useful for broadcasting a Wi-Fi signal to a wide-open area and mounting an access point on a ceiling. 
    For example, in my house, I have my main AP mounted high on a wall in my living room, and that signal can reach a large part of my house. It’s powered by a PoE switch connection in a spare bedroom where all my communications equipment is, including the broadband connection. To use PoE, in addition to a PoE-compatible switch — and you can get them as cheaply as $80 for an 8-port version — you need to have a device that can be powered by PoE, such as a business-class access point. You can find these on Amazon for like $100 or less; Netgear has a Wi-Fi 6 one for $130. So, if you’re having a tough time with mesh networking routers — like I have — this is another way to get whole-home or whole-business Wi-Fi coverage.
    What if I can’t string Cat5 in my home?

    A pair of MoCA coaxial to Cat5 Ethernet transceivers, made by Actiontec.
    There are other ways of moving Ethernet. One of them is MoCA, or Multimedia over Coax, which uses the coaxial cable you might already have in your home from back in the cable TV or satellite TV days. Many homes still have coax that was installed years ago, but you can also run coax outdoors and back into your home if needed, as it is a thick, shielded copper cable designed to be better protected from the elements. MoCA adapters are connected in pairs: one at the end of the coax run where you feed in the Ethernet signal, and one at the receiving end, where you might put a switch or, say, an access point or something else. Actiontec, for example, sells these in pairs for $170 and advertises up to 2.5Gbps speeds over existing coaxial cable, which is extremely fast. Trendnet sells a similar product for $110, and you can also get that on Amazon.
    What if you don’t have coax or don’t want to run new coax?

    A pair of HomePlug AV2 Powerline to Ethernet adapters, made by TP-LINK
    Finally, we get to something called Ethernet over Powerline, or HomePlug AV2, which is like the opposite of PoE; we are sending Ethernet signal over the AC power wires that are already inside your home. Again, this uses a pair of devices. One is plugged into the wall, and then Ethernet is cabled to your switch. Another one is plugged into the wall where you want the Ethernet signal transmitted to, and then there’s an Ethernet cable coming out of that, which plugs into whatever you want to plug it into. Using this method, it’s possible to have these adapters plugged into outlets all over your home, so your electrical system becomes one big network. 
    Now there are some gotchas to this: If your wiring is ancient and janky, this might not work, and you might not get good throughput, either. However, it is theoretically possible to get up to a gigabit connection doing things this way. You can get HomePlug AV2 adapters in pairs for about $70 to $80 on Amazon, and companies like Netgear, Trendnet, TP-Link, and D-Link make them. In my home, in the past, I’ve seen as high as 400Mbps when extending my living room’s entertainment center with this type of equipment — which is about par for the course with the fastest most people are going to see with 802.11ac Wi-Fi in optimal scenarios.
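    One caveat: internet speed tests are capped by your broadband plan, so to measure the powerline (or MoCA) hop itself, time a raw transfer between two machines on either side of the link. Here’s a rough sketch using only Python’s standard library; the port and the amount of data sent are placeholders of my choosing.

```python
# Rough LAN throughput test across, e.g., a powerline link.
# Start "python test.py server" on one machine, then
# "python test.py client <server-ip>" on the other.
import socket, sys, time

PORT = 5201                          # arbitrary example port
CHUNK = b"\x00" * 65536
TOTAL = 256 * 1024 * 1024            # send 256 MB

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while conn.recv(65536):  # drain until the client closes
                pass

def client(host):
    start = time.time()
    with socket.create_connection((host, PORT)) as s:
        sent = 0
        while sent < TOTAL:
            s.sendall(CHUNK)
            sent += len(CHUNK)
    secs = time.time() - start
    print(f"{sent * 8 / secs / 1e6:.0f} Mbps over {secs:.1f}s")

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[2])
```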


  •

    RIPE NCC discloses failed brute-force attack on its SSO service

    RIPE NCC, the organization that manages and assigns IPv4 and IPv6 addresses for Europe, the Middle East, and the former Soviet space, today disclosed a failed cyber-attack against its infrastructure.

    “Last weekend, RIPE NCC Access, our single sign-on (SSO) service was affected by what appears to be a deliberate ‘credential-stuffing’ attack, which caused some downtime,” the organization said in a message posted on its website earlier today.
    The agency said it mitigated the attack and found that no account was compromised but that an investigation is still underway.
    “If we do find that an account has been affected in the course of our investigations, we will contact the account holder individually to inform them.”
    Founded in 1992, RIPE NCC currently oversees the allocation of Internet number resources (IPv4 addresses, IPv6 addresses, and autonomous system numbers) to data centers, web hosting companies, telcos, and internet service providers in the EMEA region.
    A compromise of any RIPE NCC account would spell big problems for both RIPE and the account holder, as it would allow intruders to re-assign internet resources, even if only temporarily, to third parties.
    IPv4 addresses are currently in very high demand all over the world, and a flourishing black market has formed over the past decade. This market is fueled by hijacked IPv4 address blocks, and its most frequent customers are malware gangs, which rent access to hijacked IPv4 address space so they can send spam and skirt spam blocklists.

    One of the most notorious IPv4 address space hijacks was discovered in 2019 when more than 4.1 million IPv4 addresses were transferred from South African companies to new owners, according to an AFRINIC investigation.
    RIPE NCC officially ran out of IPv4 addresses in November 2019, which explains why threat actors are now gunning for member accounts in the hopes of hijacking existing address pools.
    RIPE is now asking all its members, estimated at around 20,000 organizations, to enable two-factor authentication for their Access accounts to prevent intruders from gaining access to these resources through simple brute-force-like attacks.
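    For a sense of what that second factor adds, here’s a minimal sketch of how TOTP-style two-factor codes work, using the third-party pyotp package (pip install pyotp). The secret is generated on the spot for illustration; RIPE NCC’s actual implementation may differ. A stolen password alone no longer gets an attacker in, because the code rotates every 30 seconds.

```python
# Minimal TOTP demo using the third-party pyotp package.
# The base32 secret here is generated for illustration; a real one
# is provisioned once per account into the user's authenticator app.
import pyotp

secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()                      # 6-digit code, rotates every 30 seconds
print("Current code:", code)
print("Verifies:", totp.verify(code))  # True only within the validity window
```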

  •

    CrowdStrike acquires Humio for $400 million

    Cyber-security powerhouse CrowdStrike announced today it acquired log management firm Humio for $400 million in a deal expected to close at the end of Q1 2021.

    The deal was confirmed in statements posted by both companies on their websites.
    Humio, a London-based startup that launched in 2016, provides products to simplify streaming, aggregating, and managing logs collected from large cloud-based enterprise networks.
    The company previously raised $32 million across two funding rounds and lists customers such as Bloomberg, HPE, Lunar, M1 Finance, Michigan State University, and SpareBank 1.
    CrowdStrike said it plans to integrate Humio’s expertise in log aggregation to upgrade its eXtended Detection and Response (XDR) offering.
    XDR products are a new breed of security products, considered an upgrade to classic EDR (Endpoint Detection and Response) solutions. They are deployed in companies whose internal networks also include many server and cloud-based products, which classic EDR solutions typically aren’t equipped to monitor.
    “We conducted a thorough market review of existing solutions and were amazed by Humio’s mature technology architecture and proven ability to deliver at scale,” said George Kurtz, co-founder and chief executive officer of CrowdStrike.

    “The combination of real-time analytics and smart filtering built into CrowdStrike’s proprietary Threat Graph and Humio’s blazing-fast log management and index-free data ingestion dramatically accelerates our XDR capabilities beyond anything the market has seen to date.”
    A CrowdStrike spokesperson did not return a request for comment on whether Humio will continue to offer its log management services as a separate product after the CrowdStrike acquisition.

  •

    Weathering the storm: What public utilities can learn from cloud computing

    We think of cloud services as a creation of the modern digital world, but one of the first cloud services was installed almost 140 years ago, on Pearl Street in Manhattan, just south of Fulton Street.

    On September 4, 1882, the Pearl Street Station began providing light for 400 lamps and 82 customers. Like many cloud services, Pearl Street Station grew. Within two years, it was providing power for 500 customers, powering more than 10,000 lamps.
    Also: Top cloud providers in 2021: AWS, Microsoft Azure, and Google Cloud, hybrid, SaaS players
    Rather than each individual customer building power-generating infrastructure, they all relied on this one centralized service. This service was the Edison Illuminating Company, which would eventually become Con Ed (Consolidated Edison), the $12B public utility that today provides power to most of New York City and Westchester County.
    A quick update
    This article was originally written in 2016 and was centered around discussion of Hurricane Matthew. In 2017, my wife and I left Florida permanently when our home was once again at ground zero, this time for Hurricane Irma. That storm left our home without power for more than a week.
    We now live in Oregon, which is not without its own natural disasters. Last year, fellow Oregonians suffered terrible losses due to the 2020 wildfires. My family was spared, but many of our neighbors were not. And that’s not even considering the whole 2020 pandemic thing, which we’re all depressingly familiar with.
    Just this weekend, we lost power for what seemed like a very long time. Oregon had a “once in a generation” ice storm, dumping 1.25 inches of ice on everything. The normal ice load most infrastructure can handle is about 0.25 inches, and the added weight downed trees, power lines, and communications everywhere. Oregon public utilities worked miracles, but even with hundreds or thousands of technicians in the field, a state like Oregon is a lot of ground to cover.

    Texas is even larger, and its big chill made things even worse for Texas residents, many of whom were completely unprepared for wintry weather. Texas, too, experienced a crippling power outage, and more than 4 million Texas residents lost power. By contrast, here in Oregon, barely an eighth of that number were lighting candles.
    Also: Extreme weather forecast? Essential gear for when the power goes out
    The rest of this article will take you back to 2016 and is written from the perspective of someone who just lived through a hurricane. But the ideas presented about the power grid apply now, and apply to other natural disasters like wildfires and ice storms.
    The bottom line, of course, is we wish you the best. Hang in there.
    Shared characteristics
    Where I live, in Central Florida, we rely on another regulated public utility, Florida Power and Light (FPL). Both of these utilities share a lot of the characteristics we’re used to in IT cloud services.
    We’ve obviously devoted a lot of virtual ink to cloud computing, so I’m not going to rehash all the elements here. But it’s important to realize that both cloud computing providers and public utility providers are the keepers of the physical infrastructure. In the case of cloud computing, that’s servers, storage, and network. In the case of public utility providers, that’s power generation, power storage, and power distribution.
    On one hand, that’s great for consumers of these centralized resources. If you want to start your own online application business, you no longer need to build out a physical infrastructure. Back in the late 1990s (before we had real cloud access), I did that. Each time I reached a server’s max capabilities, scaling involved a major investment to jump to the next level. But with cloud, you just scale smoothly, with only incremental expense.
    Likewise, with power, I don’t have to build and maintain my own onsite generator, and figure out a way to manage and safeguard fuel deliveries. If I use a bit more, or a bit less, power each month, it’s simply reflected in my bill.
    Also: How to protect your IT power from deep-freeze disasters
    These services generally work reliably and consistently. I can check my Gmail, for example, without worrying about servers and infrastructure. I can brew a pot of coffee without worrying about whether or not the generator has been topped off with fuel.
    We, as a society, have come to rely on cloud and public utility services to such an extent that they actually define our civilization. When these services fail, our lifestyle stutters. For example, when access to Facebook or Gmail goes down, we suddenly feel disconnected from our friends and colleagues. Sometimes, we’re unable to complete work on time, or stay in touch for critical communications.
    When the power goes out, everything comes to a halt. There’s no air conditioning, no lights, no food preservation. Nothing.
    Blasted back to the Stone Age
    In most cases, failures are brief. They last a few hours, at most. But Hurricane Matthew blasted Central Florida back to the Stone Age. It wasn’t pretty.
    Those of us who live on the southeast coast of the United States knew Matthew (which was a Category 4 storm, based on the Saffir-Simpson Hurricane Wind Scale) was coming for almost a week. It was due to hit the Space Coast (where I live) on Friday morning.
    My wife and I spent Tuesday, Wednesday, and Thursday preparing the house. We disassembled the workshop, and turned it back into a reinforced garage to make room for one car. We moved our second car to the garage at my parents’ old house. We battened down all the hurricane shutters. We filled tubs and gallon jugs with water. We did our best to prepare.
    By midnight on Thursday, we’d mostly finished our preparations. Our nerves were on edge. To distract ourselves as the storm approached, we decided to binge-watch the latest season of Game of Thrones. Power dropped out for a few minutes during episode one, and another few minutes during episode two. We kept glancing at an app on my phone to watch the track of the hurricane’s eye.

    At 4:43am, about 20 minutes into episode three of Game of Thrones, the power went off for the final time. The storm had arrived in full force.
    We were terrified, because all the track maps essentially showed the worst of the storm, including 140 mph winds, making landfall pretty much on top of us. Fortunately, that didn’t happen. The storm mostly missed us, with the eye about 20 miles offshore. Even so, we were hit by winds in excess of 70-90 miles per hour. By about 7am, the worst of the storm had passed.
    It was still unsafe to go outside, or even open up the internal windows behind the storm shutters. So we had no air flow in the house until about 3pm Friday. We tried to sleep. When the wind finally died down, and the limited LTE I still had on my iPhone showed that the storm had tracked northward, we removed the first of our fortifications (the shutters over our front door). We stepped outside into the fresh air.
    We were very fortunate. Our house sustained no damage. A neighbor’s fence had blown down. A home down the street had some roof damage. No one was hurt.
    But we had no power. On Friday, with the winds from the tail end of the storm still surging, at least we had a breeze. We thanked goodness that the supernatural heat of the summer was behind us. We opened our windows, and we lit some candles. The cross-breeze helped a little.
    Verizon’s LTE was barely functional, so getting information was nearly impossible. We had no idea when power would be restored. My iPhone was down to 50 percent. Normally, I’m very happy with the battery on the iPhone 6s Plus. But with no idea when I’d be able to recharge, I started to get nervous. We had some D-cell batteries for the fans, but, again, we didn’t know how long we’d be without power.
    All we could think about was how long we’d have to go without power, and how the hell we’d make it for however long it would be. When we finally regained some level of internet connectivity on the phone, the FPL status update site merely said that they were working hard to restore power. No time estimate was possible.
    On Saturday afternoon, power did come back on… for 20 seconds. The lights came on, and I almost teared up with relief. It was short-lived relief. All of a sudden there was a boom. The lights went back off. A nearby transformer had exploded, probably from debris across the connectors. It would be another day before we got power back again.
    All told, we were without power for three days. I’m ashamed to say that I didn’t take my forcible removal from civilization well. I was miserable, uncomfortable, desperate, and a little crazy. I couldn’t sleep. I didn’t eat much. There was nothing to do, nothing to work on, and — as a person who usually enjoys the illusion that I’m very much in control of my own destiny — nothing I could do to improve our situation.
    We simply had to wait
    This is the problem with centralized services like cloud services and public utilities. The convenience, scalability, cost-savings, reduced maintenance, and general reliability come at the cost of self-determination. If those services fail, they take you down with them.
    This is why, with cloud computing services, we often talk about redundancy, and keeping local backups. We can also employ a similar strategy with public utilities, although the implementation is much more complex, much more costly, and much less reliable.
    I do not own a generator. I regretted that a lot over this very long weekend. However, while there are relatively inexpensive generators available, one that can power A/C for a longer duration is very large and incredibly expensive. Worse, there’s the question of how to safely store the fuel during the storm, and whether the actual generator will survive the pounding of the storm.
    A similar concern exists for solar power. It would be great to put up solar cells and not have to pay the monthly power bill at all. But in a hurricane-prone area, solar cells are likely to be torn off the roof before they can provide the emergency power they’re intended for. It’s kind of a Catch-22.
    The real answer is that the public utilities, the power companies, need to implement more robust power distribution mechanisms.
    Okay, let me stop here for a moment. Before I criticize FPL and its ilk, I want to give a huge shout-out to all the very hardworking repair teams who restored our power over the weekend. I spoke to some of the guys working the lines, and they told me they’d been flown in from out of state before the storm. They worked their way up the state, restoring power county-by-county, city-by-city. They had had almost no sleep for days, while having to work with live power lines in 90-degree heat. They’re champions and heroes.
    Demand a better solution
    That said, this is not how it should work. All our power lines (and broadband lines, for that matter) are exposed and hanging. This is unconscionable. The power services know that we’re prone to hurricanes, yet they allow these lines to remain open and exposed.
    Image: David Gewirtz
    Worse, they’re often poorly maintained during non-emergency times. The picture you see to the right is the transformer behind my house. Notice all the overgrowth? If a branch crosses over the connectors, that transformer will either spark or explode. It’s already exploded once. And yet, that’s how FPL distributes power in an area prone to wind storms.
    Can you imagine such irresponsibility among cloud computing providers? It’s as if, knowing what they do about the prevalence of hackers and infiltrators, Google just didn’t bother using firewalls, intrusion prevention, or even password security. It’s as if Google’s entire cybersecurity strategy was “eh, call us when you’re hacked, and we’ll fix it when we get to it.”
    No one would tolerate such a thing. But that’s because Google has competition, which keeps it agile and competitive. We’re stuck with our single power provider, FPL, which has no competition. As such, it can choose to prioritize repairs using a system that essentially waits to see what breaks, rather than building in any preventive infrastructure.
    There is no way, yet, to prevent these terrible storms. But the damage due to the storm is often the result of a failure in infrastructure planning, maintenance, or investment, not due to acts of Mother Nature. Katrina was a bad storm, to be sure. But it was the failure to maintain the levees protecting New Orleans that was the cause of most of the damage.
    Here in Brevard County, we have a little over 300,000 power customers. Friday night, more than 200,000 of them were without power. By Saturday night, 100,000 were still without power. And even today, after Friday, Saturday, Sunday, and now Monday, some of our friends are still waiting to have their power restored.
    It’s not that FPL didn’t have repair escalation plans in place, or dedicated workers. They did. Those folks are fantastic. The emergency response was well executed. But infrastructure as poorly maintained as the transformer in my back yard does not show an ongoing dedication to emergency prevention and disaster mitigation.
    We allow our public utilities to be monopolies because of the enormous investments required to deliver service to all customers. We regulate them because they’re monopolies. But we don’t do enough to demand that they harden and protect their infrastructure — and that’s because they don’t have any competition.
    If FPL had competition, the way Google has to compete against Microsoft and AWS, you can be sure we’d not only have spent this weekend in cool comfort, we’d probably spend a lot less each month on the services we do get.
    Perhaps as companies like Tesla (and even Apple) develop more robust battery technology, we can replace generators and solar cells with highly efficient in-structure batteries. Then, maybe we’ll be able to withstand five days without power from the grid, simply by tapping into our own private pool of battery power.
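    The back-of-the-envelope math shows the scale involved. As a rough sketch, assuming a typical US household draws about 30 kWh per day and a home battery stores roughly 13.5 kWh (both assumptions; your numbers will differ):

```python
# Back-of-the-envelope battery sizing; all figures are rough assumptions.
DAILY_USE_KWH = 30          # typical US household; varies widely
DAYS_WITHOUT_GRID = 5
BATTERY_KWH = 13.5          # roughly one current home battery's capacity

needed = DAILY_USE_KWH * DAYS_WITHOUT_GRID
print(f"Energy needed: {needed} kWh")                     # 150 kWh
print(f"Batteries required: {needed / BATTERY_KWH:.1f}")  # about 11
```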
    Or, perhaps, similar to the way the internet has disrupted other forms of infrastructure, we’ll start to see new and innovative ways we can produce our own energy. Perhaps we will be able to replace the service we get from the grid or, at the very least, have an alternate source available for when storms like Matthew hit an entire region.
    You can follow my day-to-day project updates on social media. Be sure to follow me on Twitter at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.

  •

    Private firms can't protect us from digital attacks. Government must step in.

    Unless you’ve been living under a rock, you know that our digital infrastructure is under attack. ZDNet’s excellent security coverage has daily updates, usually with names I’ve never heard of before. As the ZDNet security tagline says, “Let’s face it. Software has holes. And hackers love to exploit them. New vulnerabilities appear almost daily.” 

    Sadly, that’s not hyperbole. “SolarWinds attack is not an outlier, but a moment of reckoning for security industry, says Microsoft exec” is a recent headline. 
    Vasu Jakkal, Microsoft’s corporate vice president of security, compliance and identity, said,

    “These attacks are going to continue to get more sophisticated. So we should expect that. This is not the first and not the last. This is not an outlier. This is going to be the norm. This is why what we do is more important than ever. I believe that SolarWinds is a moment of reckoning in the industry. This is not going to change and we have to do better as a defender community and we have to be unified in our responses.”

    But Ms. Jakkal is wrong. Private enterprise can’t handle serious, nation-state digital aggression. Nations have the resources and patience to pursue long-term strategies. Even the largest corporations lack the heft of a nation.
    Microsoft estimates that at least 1,000 engineers were needed to develop the SolarWinds hack. What company, what consortium of companies, could devote similar resources? 
    We don’t send defense contractors to fight wars. We send armed forces, backed by intelligence agencies and diplomacy – as well as the weapons defense contractors develop – to defeat the enemy.  
    Digital aggression is aggression
    “Scale changes everything” is a Silicon Valley truism. Back when the Internet’s predecessor, ARPAnet, was five nodes, there was no money in digital crime.

    Now the Internet is five billion nodes. Deep into the transition to a digital civilization, crime is following the money. The thieves, gangs, and nation-state bad actors are stealing everything that isn’t locked down. Money, industrial secrets, intelligence assets, and personal data.
    There’s no end in sight since “software engineering” is an oxymoron. As Randall Munroe had a software writer say on xkcd.com: “. . . our entire field is bad at what we do, and if you rely on us, everyone will die.” We don’t know how to build a digital dike that doesn’t leak. We can only plug holes after the bad guys find them.
    Strategically, deterrence seems to be the only option for persuading nation states to back off. And only a strong nation can persuade another nation to chill, as the Cold War showed. 

    Likewise, today’s Internet needs a police force as well. The Internet is borderless, so a global force is needed to bring the criminals to heel.
    Despite massive private investment in digital security, the stakes keep rising and the hacks are getting worse. Private enterprise isn’t working. Private efforts to coordinate across organizations to record and analyze attacks are not enough.
    Can the US government take this on?
    Don’t reflexively dismiss the idea that government could handle this. Consider the US armed forces, the world’s most powerful fighting force: handsomely funded, well-trained, and constantly analyzing the threats America faces. That’s a blueprint for a US Digital Defense Force.
    Perhaps you recoil at the thought of higher taxes to pay for the DDF. But the choice isn’t between no taxes and higher taxes. Criminals and nation-states – in Russia, they may be one and the same – are already collecting massive taxes to fund their aggression. The choice is essentially between paying for digital order and security, or paying the criminals.

    The take
    America’s adversaries are actively probing our infrastructure for vulnerabilities. America’s superiority in conventional forces – for now anyway – makes a big shooting war unlikely. But crippling America’s government, power, water, energy, and medical systems all at once would help even the odds if someone wanted to take us down.
    The current model of digital security isn’t working, nor is there a plan to fix it. Sorry Microsoft, you – and the rest of the private firms – don’t have the chops to take on Russia, Iran, and North Korea. 
    We’ve been here before. London in the early 1800s was a city of 1.3 million people with no central police force. In 1829 Parliament established the Metropolitan Police to bring order and security. Private firms and wealthy individuals had guards, but that was not enough.
    Like 1820s London, we need a well-funded and trained force to stop digital muggers, gangs, and conspiracies, whether private or nation-sponsored. And we need our government to make it clear that countries that mess with our digital infrastructure will face painful consequences.
    Comments welcome. If you don’t like the government idea, what would you do instead?

  •

    SolarWinds attack hit 100 companies and took months of planning, says White House

    The White House team leading the investigation into the SolarWinds hack is worried that the breach of 100 US companies has the potential to turn the initial compromise into a launching pad for follow-on attacks.
    Anne Neuberger, deputy national security advisor for Cyber and Emerging Technology at the White House, said in a press briefing that nine government agencies were breached, and that many of the 100 private sector US organizations hit were technology companies.

    “Many of the private sector compromises are technology companies including networks of companies whose products could be used to launch additional intrusions,” said Neuberger, a former director of cybersecurity at the National Security Agency.
    SEE: Network security policy (TechRepublic Premium)
    Attackers that the US says are of “likely Russian origin” had compromised the software build system of US software vendor SolarWinds and planted the Sunburst backdoor in its widely used Orion product for monitoring enterprise networks.   
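    Sunburst rode a legitimately signed update through the official channel, so routine integrity checks wouldn’t have caught it; still, verifying artifact hashes against values published out of band remains the baseline defense for anything you deploy. A generic, hypothetical sketch in Python (the file and hash arguments are invented for illustration):

```python
# Generic integrity check: compare a downloaded artifact's SHA-256
# against a value published out of band. Arguments are hypothetical:
#   python verify.py update.msi <expected-sha256-hex>
import hashlib
import sys

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    path, expected = sys.argv[1], sys.argv[2]
    actual = sha256_of(path)
    if actual != expected.lower():
        sys.exit(f"HASH MISMATCH for {path}: got {actual}")
    print(f"{path}: OK")
```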
    That 100 private sector firms were breached in the attack paints a different picture from what was known in December, when Microsoft and FireEye, which were both breached, disclosed the attack.
    At that stage there were eight federal agencies confirmed to have been breached, including the US Treasury Department, the Department of Homeland Security, the US Department of State, the US Department of Energy, and the National Nuclear Security Administration.   

    However, back then Microsoft and FireEye were the two most significant private sector companies known to have been compromised by the tainted Orion update (the Orion updates weren’t the only way that companies were infiltrated during the campaign, which also involved the hackers gaining access to cloud applications).
    “When there is a compromise of this scope and scale both across government and across the US technology sector to lead to follow-on intrusions, it is more than a single incident of espionage. It’s fundamentally of concern for the ability of this to become disruptive,” Neuberger explained during questioning. 

    She stressed that the attackers were “advanced” because the “level of knowledge they showed about the technology and the way they compromised it truly was sophisticated.”
    “As a country we chose to have both privacy and security, so the intelligence community largely has no visibility into private sector networks. The hackers launched the hack from inside the United States, which further made it difficult for the US government to observe their activities,” she said.
    Microsoft president Brad Smith told 60 Minutes last week that it was “probably fair to say that this is the largest and most sophisticated attack the world has ever seen.”
    SEE: How do we stop cyber weapons from getting out of control?
    Smith previously said the attackers “used a technique that has put at risk the technology supply chain for the broader economy.”
    “We believe it took [the attackers] months to plan and execute this compromise. It’ll take us some time to uncover this, layer by layer,” said Neuberger.
    Neuberger said she expected the investigation, as well as identification and remediation of affected networks, would take months but not years to complete. 