More stories

  • Quantum computers could soon reveal all of our secrets. The race is on to stop that happening

    A fully-fledged quantum computer that can be used to solve real-world problems. For many computer scientists, the arrival of such a device would be their version of the Moon landings: the final achievement after many decades of research — and the start of a new era.

    For companies, the development could unlock huge amounts of wealth, as business problems previously intractable for classical computers are resolved in minutes. For scientists in the lab, it could expedite research into the design of life-saving drugs.
    But for cryptographers, that same day will be a deadline — and a rather scary one. With the compute power they are expected to wield, large-scale quantum devices pose an existential threat to the security protocols that currently protect most of our data, from private voice notes all the way to government secrets.
    The encryption methods used today to transform data into an unreadable mush for anyone but the intended recipients are, at their core, enormous maths problems. Classical computers can’t solve these problems in any useful time frame; add some quantum compute power, though, and all of this carefully encoded data could turn into crystal-clear, readable information.
    The heart of the problem is public key encryption — the protocol that’s used to encode a piece of data when it is sent from one person to another, in a way that only the person on the receiving end of the message can decode. In this system, each person has a private cryptography key as well as a public one, both of which are generated by the same algorithm and inextricably tied to each other.
    The publicly available key can be used by any sender to encrypt the data they would like to transmit. Once the message has arrived, the owner of the corresponding private key can then decrypt the encoded information. The security of the system rests on the difficulty of deriving a person’s private key from their public one, because solving that problem requires factoring an enormous number into its prime components.
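    To make that flow concrete, here is a minimal sketch in Python, assuming a recent version of the open-source cryptography package. RSA stands in for public-key schemes in general, and the message and key size are purely illustrative:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# The recipient generates a key pair; the public half can be shared freely.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Any sender can encrypt with the public key...
ciphertext = public_key.encrypt(b"a private voice note", oaep)

# ...but only the holder of the private key can decrypt the result.
assert private_key.decrypt(ciphertext, oaep) == b"a private voice note"
```

    Recovering the private key from the public one here amounts to factoring the 2048-bit RSA modulus into its two prime components, which is exactly the kind of problem a classical computer cannot finish in any useful time frame.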

    Inconveniently, if there’s one thing that quantum computers will be good at, it’s crunching numbers. Leveraging the quasi-supernatural behaviour of particles in their smallest state, quantum devices are expected to one day breeze through problems that would take current supercomputers years to resolve.
    That’s bad news for the security systems that rely on hitherto difficult mathematics. “The underlying security assumptions in classical public-key cryptography systems are not, in general, quantum-secure,” says Niraj Kumar, a researcher in secure communications from the school of informatics at the University of Edinburgh.
    “It has been shown, based on attacks to these keys, that if there is quantum access to these devices, then these systems no longer remain secure and they are broken.”

    Researchers have developed quantum algorithms that can, in theory, break public-key cryptography systems. 
    Image: IBM
    But as worrying as it sounds, explains Kumar, the idea that all of our data might be at risk from quantum attacks is still very much theoretical. Researchers have developed quantum algorithms, such as Shor’s algorithm, that can, in theory, break public-key cryptography systems. But they come with no small condition: the algorithms must run on a quantum computer with a sufficient number of qubits, without succumbing to noise or decoherence.
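    The number-theoretic heart of Shor’s algorithm can even be sketched classically: factoring N reduces to finding the period of a^x mod N, and the quantum machinery exists solely to find that period quickly. Here is a toy Python sketch of the reduction, workable only for tiny numbers, in which the brute-force loop is the step a quantum computer would replace:

```python
from math import gcd

def order(a, n):
    # Smallest r with a**r % n == 1, found by brute force. This is the
    # period-finding step that Shor's algorithm speeds up exponentially.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_reduction(n, a):
    # Classical part of Shor: turn the period of a mod n into factors of n.
    if gcd(a, n) != 1:
        return gcd(a, n), n // gcd(a, n)   # lucky guess: a shares a factor
    r = order(a, n)
    if r % 2:
        return None                        # odd period: retry with another a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None                        # trivial square root: retry
    return gcd(y - 1, n), gcd(y + 1, n)

print(shor_reduction(15, 7))   # (3, 5)
print(shor_reduction(21, 2))   # (7, 3)
```

    On the 2048-bit moduli used in practice, that loop would outlast the universe; the 20-million-qubit estimate cited below corresponds to a machine that could find the period in a matter of hours.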

    In other words, a quantum attack on public-key cryptography systems requires a powerful quantum computer, and such a device is not on any researcher’s near-term horizon. Companies involved in the field are currently sitting on computers of fewer than 100 qubits; in comparison, recent studies estimate that it would take about 20 million qubits to break the algorithms behind public-key cryptography.
    Kumar, like most researchers in the field, doesn’t expect a quantum device to reach a meaningful number of qubits within the next 10 or 20 years. “The general consensus is that it is still very much a thing of the future,” he says. “We’re talking about it probably being decades away. So any classical public-key cryptography scheme used for secure message transmission is not under imminent threat.”
    NIST, the US National Institute of Standards and Technology, for its part estimates that the first quantum computer that could pose a threat to the algorithms that are currently used to produce encryption keys could be built by 2030. 
    Don’t let the timeline fool you, however: this is not a problem that can be relegated to future generations. A lot of today’s data will still need to be safe many years hence — the most obvious example being ultra-secret government communications, which will need to remain confidential for decades.
    The quest for quantum-safe
    This type of data needs to be protected now with protocols that will withstand quantum attacks when they become a reality. Governments around the world are already acting on the quantum imperative: in the UK, for example, the National Cyber Security Centre (NCSC) has accepted for several years now that it is necessary to end reliance on current cryptography protocols, and to begin the transition to what’s known as ‘quantum-safe cryptography’.
    Similarly, the US National Security Agency (NSA), which currently uses a set of algorithms called Suite B to protect top-secret information, noted in 2015 that it was time to start planning the transition towards quantum-resistant algorithms.
    As a direct result of the NSA’s announcement five years ago, a global research effort into new quantum-safe cryptography protocols started in 2016, largely led by NIST in the US. The goal? To develop encryption schemes built on problems that would remain too difficult to solve even for a quantum computer — an active research field now called ‘post-quantum cryptography’.
    NIST launched a call for help to the public, asking researchers to submit ideas for new algorithms that would be less susceptible to attack by a quantum computer. Of the 69 submissions the organisation received, a group of 15 was recently selected by NIST as showing the most promise.
    There are various mathematical approaches to post-quantum cryptography, which essentially consist of making the problem harder to crack at different points in the encryption and decryption processes. Some post-quantum algorithms are designed to safeguard the key agreement process, for example, while others ensure quantum-safe authentication thanks to digital signatures.
    The technologies comprise an exotic mix of methods — lattices, polynomials, hashes, isogenies, elliptic curves — but they share a similar goal: to build algorithms robust enough to be quantum-proof.
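    Of those families, hash-based constructions are the simplest to illustrate. Below is a minimal sketch of a Lamport one-time signature in Python. It is a teaching toy rather than one of the NIST candidates, but it shows where the quantum resistance comes from: the only assumption is that the hash function cannot be inverted, a problem against which quantum computers offer only a modest speed-up.

```python
import hashlib
import secrets

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen():
    # 256 pairs of random secrets; the public key is their hashes.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def msg_bits(msg: bytes):
    # One bit of the message digest per key pair.
    d = H(msg)
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(msg, sk):
    # Reveal one secret from each pair, chosen by the message bit.
    # A Lamport key pair must only ever sign a single message.
    return [pair[b] for pair, b in zip(sk, msg_bits(msg))]

def verify(msg, sig, pk):
    return all(H(s) == pair[b] for s, pair, b in zip(sig, pk, msg_bits(msg)))

sk, pk = keygen()
sig = sign(b"quantum-safe hello", sk)
assert verify(b"quantum-safe hello", sig, pk)
assert not verify(b"tampered hello", sig, pk)
```

    Real hash-based candidates such as SPHINCS+ layer many of these one-time keys into a reusable scheme, but the underlying security argument is the same.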
    The 15 algorithms selected by NIST this year are set to go through another round of review, after which the organisation hopes to standardise some of the proposals. NIST plans to have the core of the first post-quantum cryptography standards in place before 2024.
    NCSC in the UK and NSA in the US have both made it clear that they will start transitioning to post-quantum cryptography protocols as soon as such standards are in place. But government agencies are not the only organisations showing interest in the field. Vadim Lyubashevsky, from IBM Research’s security group, explains that many players in different industries are also patiently waiting for post-quantum cryptography standards to emerge. 
    “This is becoming a big thing, and I would say certainly that everyone in the relevant industries is aware of it,” says Lyubashevsky. “If you’re a car manufacturer, for example, you’re making plans now for a product that will be built in five years and will be on the road for the next ten years. You have to think 15 years ahead of time, so now you’re a bit concerned about what goes in your car.”

    For IBM’s Vadim Lyubashevsky, many players in different industries are patiently waiting for post-quantum cryptography standards to emerge.   
    Image: Cait Oppermann for IBM
    Any product that might still be in the market in the next couple of decades is likely to require protection against quantum attacks — think aeroplanes, autonomous vehicles and trains, but also nuclear plants, IoT devices, banking systems or critical telecommunications infrastructure.
    Businesses, in general, have remained quiet about their own efforts to develop post-quantum cryptography processes, but Lyubashevsky is positive that concern is mounting among those most likely to be affected. JP Morgan Chase, for example, recently joined research hub the Chicago Quantum Exchange, mentioning in the process that the bank’s research team is “actively working” in the area of post-quantum cryptography. 
    That is not to say that quantum-safe algorithms should be top-of-mind for every company that deals with potentially sensitive data. “What people are saying right now is that threat could be 20 years away,” says Lyubashevsky. “Some information, like my credit card data for example — I don’t really care if it becomes public in 20 years. There isn’t a burning rush to switch to post-quantum cryptography, which is why some people aren’t pressed to do so right now.”
    Of course, things might change quickly. Tech giants like IBM are publishing ambitious roadmaps to scale up their quantum-computing capabilities, and the quantum ecosystem is growing at pace. If milestones are achieved, predicts Lyubashevsky, the next few years might act as a wake-up call for decision makers. 
    Consultancies like security company ISARA are already popping up to advise businesses on the best course of action when it comes to post-quantum cryptography. From a more pessimistic perspective, however, Lyubashevsky points out that in some cases it might already be too late.
    “It’s a very negative point of view,” says the IBM researcher, “but in a way, you could argue we’ve already been hacked. Attackers could be intercepting all of our data and storing it all, waiting for a quantum computer to come along. We could’ve already been broken — the attacker just hasn’t used the data yet.”
    Lyubashevsky is far from the only expert to discuss this possibility, and the method even has a name: ‘harvest and decrypt’. The practice is essentially an espionage technique, and as such mostly concerns government secrets. Lyubashevsky, for one, is convinced that state-sponsored attackers are already harvesting confidential encrypted information about other nations, and sitting on it in anticipation of a future quantum computer that would crack the data open.
    For the researcher, there is no doubt that governments around the world are already preparing against harvest-and-decrypt attacks — and as reassuring as it would be to think so, there’ll be no way to find out for at least the next ten years. One thing is for certain, however: the quantum revolution might deliver some nasty security surprises for unprepared businesses and organisations.

  • You're using your Android fingerprint reader all wrong

    Fingerprint readers offer a quick and convenient way to unlock Android smartphones.
    When it works.
    If you work with your hands, you may have noticed that the fingerprint reader can be somewhat unreliable, requiring several jabs from the meat nugget to work. It only takes an additional second or so, but it’s a speed bump when it comes to unlocking.
    So, what’s the problem?
    The problem is that fingerprints wear out and change as you work with your hands. It’s not enough of a change to allow you to get away with crimes, but the wear and tear and scuffs and scars can be enough to fool the scanner.

    Fingerprints on my index finger are worn and unreliable on my fingerprint readers
    If you work with your hands outdoors or as a technician, this will be an issue, but it’s also an issue for people with demanding hobbies such as rock climbing or CrossFit.

    So, short of getting a hand double to unlock your handset for you, what can you do?
    I’ve come across three workarounds to this problem.
    #1: Give Android the middle finger
    Literally.
    Use the fingerprints on your middle finger. Sure, it takes a little bit of getting used to, but I (along with others I’ve shared this trick with) have found that the prints on the middle finger take less damage than those on other fingers.
    This is particularly handy for Android smartphones that have the fingerprint reader on the back.
    #2: Use the side of a finger (or thumb)
    Rather than using the tips of the fingers, use the sides, especially the thumb. Again, it’s a spot that takes less damage.
    This works well for smartphones with side-mounted fingerprint readers.
    #3: Game the system
    Another trick I find works well is to enroll the same finger with the fingerprint reader several times over a period of time. This way, it learns to read your fingerprint through the random scuffs and scars that accumulate.
    This trick is handy for those who don’t want to change the finger they use to unlock their smartphone.

  • Marriott fined £18.4 million by UK watchdog over customer data breach

    The Information Commissioner’s Office (ICO) has fined Marriott £18.4 million over a 2014 data breach, heavily reducing the originally planned penalty due to COVID-19 disruption.

    The Marriott hotel group was subject to a 2014 data breach impacting the Starwood resort chain, which Marriott acquired in 2016. 
    At the time, threat actors were able to infiltrate Starwood systems and execute malware via a web shell, including remote access tools and credential harvesting software. 
    The attackers were then able to enter databases used to store guest reservation data including names, email addresses, phone numbers, passport numbers, travel details, and loyalty program information. 
    The compromise continued until 2018, and over the course of four years, information belonging to roughly 339 million guests was stolen. In total, seven million records relating to UK guests were exposed.  
    The ICO says the company failed to meet the security standards required by GDPR due to failures to “put appropriate technical or organizational measures in place” when processing data, and as such, the company contravened data protection requirements now enforced through 2018 GDPR regulations. 

    However, the watchdog acknowledged that “Marriott acted promptly to contact customers and the ICO” once the cybersecurity incident was uncovered, and “acted quickly to mitigate the risk of damage suffered by customers.”
    The hotel chain, alongside rivals such as Hilton, has been forced to slash thousands of jobs as travel plans, business trips, and holidays were canceled due to the coronavirus pandemic. After posting its first quarterly loss in close to a decade, the company said it expects a cash burn of $85 million a month in 2020.
    Due to Marriott’s current struggles, and with the company’s recent security improvements in mind, the ICO has still issued a fine — but one drastically cut from its originally proposed penalty of over £99 million. 
    The original notice of intent to fine, issued in July 2019, set the penalty at £99,200,396 for GDPR violations. However, the ICO says that talks with Marriott, the company’s security improvements, and the economic damage caused by COVID-19 have led to the revised figure. 
    “Millions of people’s data was affected by Marriott’s failure; thousands contacted a helpline and others may have had to take action to protect their personal data because the company they trusted it with had not,” commented Elizabeth Denham, UK Information Commissioner. “When a business fails to look after customers’ data, the impact is not just a possible fine, what matters most is the public whose data they had a duty to protect.”
    Last month, British Airways was fined £20 million by the ICO after cyberattackers stole information belonging to over 400,000 customers in 2018. 
    The data and privacy watchdog slammed the airline for “unacceptable” security failures leading to the data breach, including a lack of cybersecurity audits, lax access controls, and little use of two-factor authentication (2FA). 
    The fine is one of the highest the ICO has issued to date; however, it could have been far worse. The £20 million figure was calculated in consideration of BA’s “considerable” security improvements and the impact of COVID-19 on the business. 

  • CERT/CC launches Twitter bot to give security bugs random names

    In an attempt to reduce the use of sensationalized and scary vulnerability names, the CERT/CC team launched a Twitter bot that will assign random and neutral names to every security bug that receives a CVE identifier.

    Named Vulnonym, the bot is operated by the CERT Coordination Center (CERT/CC) at the Carnegie Mellon University, the first-ever CERT team created, and now a collaborator and partner of the DHS’ official US-CERT team.
    The idea for the bot came out of seemingly unending discussions over the question of whether vulnerabilities should have names.
    The problem with vulnerability names
    For decades, all major security flaws have been assigned a CVE identifier by the MITRE Corporation. This ID is usually in the format of CVE-[YEAR]-[NUMBER], such as CVE-2019-0708.
    Security software usually uses these CVE IDs to identify, track, and monitor bugs for statistical or reporting purposes; humans rarely use the IDs in any meaningful way.
    Over the years, some security firms and security researchers realized that their work in identifying important bugs could easily get lost in a constant stream of CVE numbers that almost everyone has a hard time remembering.
    Companies and researchers realized that the bugs they discovered stood a better chance of standing out if they had a cool-sounding name.

    And so the practice of “bug naming” came to be, with the best-known examples being Spectre, Meltdown, Dirty Cow, Zerologon, Heartbleed, BlueKeep, BLESA, SIGRed, BLURTooth, DejaBlue, or Stagefright.
    But as time went by, some vulnerability names started to deviate from being descriptive of a security bug and entered the realm of fearmongering and attention-seeking, becoming a marketing shtick.
    Things reached a ridiculous level last year when a Cisco bug was named with three cat emojis, rendered in speech as Thrangrycat (aka “three angry cats”).
    In recent years, many security experts have come to react with vitriol and derision every time a disclosed security bug arrives with a name.
    Some major bugs have been played down just because the vulnerability received a name, while seemingly unexploitable bugs were overhyped and received far too much media attention just because they were launched with a name, website, logo, and sometimes even a theme song.
    Yes, vulnerabilities should have names
    But in a blog post on Friday, the CERT/CC team put forward a solution to bring some order to vulnerability naming: the Vulnonym bot, which will assign a two-word codename in the format adjective-noun to every newly assigned CVE ID.
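    CERT/CC hasn’t detailed the bot’s internals in the post, but the idea is easy to sketch: derive a deterministic, deliberately bland adjective-noun pair from the CVE ID itself. A hypothetical Python sketch, with placeholder word lists that are not Vulnonym’s actual vocabulary:

```python
import hashlib

# Placeholder word lists; the real bot draws on a far larger vocabulary.
ADJECTIVES = ["amber", "brisk", "mellow", "quiet", "sturdy", "tidy", "velvet", "wistful"]
NOUNS = ["falcon", "kettle", "lantern", "otter", "pebble", "quill", "saddle", "teapot"]

def codename(cve_id: str) -> str:
    # Hashing the ID makes the name deterministic: the same CVE
    # always receives the same neutral, unscary codename.
    d = hashlib.sha256(cve_id.encode()).digest()
    return f"{ADJECTIVES[d[0] % len(ADJECTIVES)]} {NOUNS[d[1] % len(NOUNS)]}"

print(codename("CVE-2019-0708"))  # deterministic output, no fearmongering attached
```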
    “Not every named vulnerability is a severe vulnerability despite what some researchers want you to think,” said Leigh Metcalf, a member of the CERT/CC team.
    “We aren’t arguing that vulnerabilities shouldn’t have names, in fact, we are encouraging this process!”
    Metcalf argues that humans inherently need easy-to-remember terms to describe security bugs because “humans aren’t well conditioned to remember numbers,” such as the ones used for CVE IDs.
    She likened the situation to how domain names came to be, as humans are far more likely to remember google.com than the numeric IP address of the server where the google.com website is hosted.
    “Our goal is to create neutral names that provides a means for people to remember vulnerabilities without implying how scary (or not scary) the particular vulnerability in question is,” Metcalf said.

  • Services Australia working on WPIT overhaul cyber concerns

    More than five years ago, the Department of Human Services kicked off the program of work to replace the then-30-year-old Income Security Integrated System (ISIS) that is used to distribute welfare to Australians.
    The project, known as the Welfare Payment Infrastructure Transformation (WPIT) program, was slated to cost around AU$1.5 billion and run from 2015 to 2022.
    The Australian National Audit Office (ANAO) last month handed down its examination of WPIT, finding the former department, now known as Services Australia, had “largely appropriate arrangements” in many areas, but was lacking on the cyber and cost monitoring fronts.
    Agency representatives told Senators last week that the agency is currently working on the recommendations made by the ANAO.
    “We would agree with the ANAO report at that time that there were components of the system that have not been accredited, we have an approved program of work that is going through that accreditation program now,” Services Australia general manager cyber services Tim Spackman said.
    “I think it’s worth noting that there is a number of components to that system and even small changes require re-accreditation throughout that process — it’s not a set and forget scenario.”
    Spackman said the department has worked closely with the Australian Cyber Security Centre and that it has a “really good capability” in its 24/7 cyber operation centre.

    Specifically, Spackman said the department is currently looking at the ISIS component and has “done the lion’s share of that work”. He said completion is due before the year is out.
    “I would like to stress though, that the accreditation piece does not mean that nothing’s happening in the interim, we are continually looking at maturing our cyber capability,” he continued. “We just need to accept some of the mitigations and put that into a program of work.
    “The system is large and changes of that scale shouldn’t be done quickly or done in an ill-planned way, so it does take some time to ensure that we don’t disrupt services.”
    Providing an update on where WPIT is up to, deputy CEO for transformation projects Charles McHardie said Services Australia is currently in tranche four of the project.
    He said the agency has been funded to deliver across five key priorities in the final two years of the program. The first, he said, is reusable technology.
    “That is rolling out what we call a new payment utility capability, which allows us to replace the current payment capability that sits in the ISIS system, pushing the payment out to the Reserve Bank. So we’re replacing that,” he said.
    Services Australia released one payment through that program six weeks ago, the parenting allowance, with pensions scheduled to follow on Tuesday.
    “That’s been developed in what we call the SAP S4 HANA technology capability,” he said.
    The second is the entitlement calculation engine, which McHardie has called the “heart of the ISIS system”.
    “The ISIS system has about 30 million lines of code, so quite complex, and around 4 million lines of that code base is related to entitlement calculations,” he said. 
    “This is basically where a customer submits a claim to us, tells us the circumstance of their situation, the system takes that circumstance, any information we already know about them in the core database that supports it, plus any additional information that’s been input by staff as part of that claim process, and comes up with an entitlement calculation based on social security legislation rules, which sit in that system.”
    He said based on that, the payment utility would make the payment through the Reserve Bank.
    The agency has outsourced this to systems integrator Infosys and is utilising technology from Pegasystems.
    “Over the period from now all the way through to the end of 2022, we will replace all of our entitlement calculations with that new capability,” he said. “So they’re what we call the two pieces of reusable tech.”
    It is expected Services Australia will use the technology for aged care reform and veteran-centric reform, too.
    The agency will then be implementing automation, claim transformation, circumstance updates, and a “data and enabling capability”.
    “The main thrust there is to replace all of the screens that our staff use when they process new claims, and when they deal with claim maintenance activity on a daily basis,” McHardie said.

  • Microsoft is mad as hell. This may make it worse

    More fuel for Redmond’s fiery plea for trust?
    You’ve probably had one or two thoughts about politics lately.

    It’s that time of year. The light begins to disappear, both outside your door and inside the eyes of tired, nonsense-peddling politicians.
    Perhaps this is what led Microsoft to fully express its own indignation at US politicians’ inability to do what more than 130 other countries have already managed — enact a digital privacy law or two.
    Last week, I offered the words of Julie Brill, Microsoft’s corporate vice-president for Global Privacy and Regulatory Affairs and chief privacy officer. (Her business card is 12 inches wide.)
    She expressed Redmond’s frustration that the US is so far behind in doing the right thing. She said: “In contrast to the role our country has traditionally played on global issues, the US is not leading, or even participating in, the discussion over common privacy norms.”
    Ultimately, however, Brill said the company’s research showed people want business to take responsibility, rather than government.
    Which some might think humorous, given how tech companies — Microsoft very much included — have treated privacy, and tech regulation in general, as the laughable burp of a constantly acquisitive society.

    I wondered, though, what other companies really thought about all this.
    In an attack of serendipity that I hope didn’t come from snooping around my laptop, new research asking those sorts of questions was just published.
    Snow Software, a self-described “technology intelligence platform” — you’re nothing if you’re not a platform — talked to 1,000 IT leaders and 3,000 employees from around the world.
    This was all in the cause of the company’s annual IT Priorities Report.
    I hope Brill and her team at Microsoft are sitting down as they read this. You see, 82% of employees said more regulation was needed. As did 94% of IT leaders. (The other 6% must be doing their jobs from a sandy beach, with a hefty supply of cocktails.)
    Yes, Microsoft, more people agree with you more strongly, yet still so little is being done. That won’t soothe your innards. It’ll drive you madder. Sometimes, having the majority on your side still doesn’t make you the winner.
    The majority of those surveyed who believed more regulation is necessary pointed to data protection and cybersecurity as the most urgent areas.
    In the US, though, IT leaders agreed that the most important area for correction was data protection, but next came data collection. They understand how the mining of our very souls has become entirely uncontrolled.
    These US IT leaders placed cybersecurity as third on their list of priorities, followed by universal connectivity and, how bracing that they mentioned this, competition.
    I asked Snow to dig deeper into its survey and offer me some unpublished details about its findings. One of the more awkward findings was that IT leaders named both adopting new technologies and reducing security risks as priorities. Yet the former can cause more of the latter, rather than less. How can you square the two?
    Naturally, there was something of a gulf between IT leaders and employees on one issue — technology that’s left unmanaged or unaccounted for.
    Far more employees think this is no biggie, whereas IT leaders would like to stand in front of these employees and scream for a very long time. While phrases such as “government fines” and “contractual breaches” emerged from their foamy mouths.
    Yet perhaps the most pungent and dispiriting result from this study is that a mere 13% of employees said tech regulations make them feel vulnerable. Last year, the number was 24%.
    You might think this good news. You’ll think it suggests security has somehow progressed enormously. 
    I’m not quite as optimistic. I worry employees are now so used to living inside technology that, in truth, they’ve entirely stopped thinking about the negative consequences of its insecurity. Whatever other answers they might give in surveys.
    Why, here’s an answer employees gave: a trifling 28% said current tech regulations made them feel safe. That’s only 2 points higher than last year.
    Tech regulation isn’t easy. Tech companies have been allowed to swallow our lives whole and leave a complex indigestion for us to deal with. Too often, we don’t even bother trying because, well, it shouldn’t be our responsibility.
    These haven’t been responsible times. Tech has moved fast and broken things that really shouldn’t have been broken.
    The pieces on the floor are everyone’s. The responsibility for putting them back together lies, as Microsoft now confesses, with the High Humpties of government and business.
    I begin to hold my breath.

  • Chrome will soon have its own dedicated certificate root store

    Image: Christiaan Colen (Flickr)
    Google has announced plans to run its own certificate root program/store for Chrome, in a major architectural shift for the company’s web browser program.
    A “root program” or a “root store” is a list of root certificates that operating systems and applications use to verify identities: that of a software program during its installation routine, for example.
    Browsers like Chrome use root stores to check the validity of an HTTPS connection.
    They do this by looking at the website’s SSL certificate and checking if the root certificate that was used to generate the SSL cert is included in the local root program/store.
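    That check is visible in any TLS client. Here is a minimal Python sketch using the standard library, which by default trusts the platform’s root store, the same OS-provided list Chrome has relied on until now:

```python
import socket
import ssl

# create_default_context() loads the platform's trusted root
# certificates -- the OS root store described above.
context = ssl.create_default_context()

with socket.create_connection(("example.org", 443)) as sock:
    # wrap_socket() performs the TLS handshake and raises
    # ssl.SSLCertVerificationError if the site's certificate does
    # not chain up to a root in the trusted store.
    with context.wrap_socket(sock, server_hostname="example.org") as tls:
        print("verified; issuer:", tls.getpeercert()["issuer"])
```

    Chrome’s planned change keeps this chain-building logic but swaps the OS-provided trust list for one that Google ships and updates with the browser itself.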
    Chrome will shift from OS root store to its own
    Since its launch in late 2008, Chrome has been configured to use the “root store” of the underlying platform. For example, Chrome on Windows checks a site’s SSL certificate against the Microsoft Trusted Root Program, the root store that ships with Windows; Chrome on macOS relies on the Apple Root Certificate Program; and so on.
    But in a wiki page, shared with ZDNet by one of our readers, Google announced plans to create its own root store, named the Chrome Root Program, that will ship with all versions of Chrome, on all platforms, except iOS.
    The program is currently in its early stages, and there is no timeline for when Chrome will transition from using the OS root store to its own internal list.

    For now, Google has published rules for Certificate Authorities (CAs), the companies that issue SSL certificates for websites.
    The browser maker is urging CAs to read the rules and apply to be included in its new Chrome Root Program whitelist to ensure a seamless transition for Chrome users when the time comes.
    With a market share of 60% to 65%, Chrome is the gateway to the internet for most users, and most CAs will likely have their affairs in order when the transition moment comes.
    Similar to Firefox
    This approach of packing the root store inside the browser, rather than using the one provided by the underlying OS, isn’t new; it is what Mozilla has been doing for Firefox since its launch.
    Reasons to do so are many, starting with the ability for Chrome’s security team to intervene and ban misbehaving CAs faster, and Google’s desire to provide a consistent experience and common implementation across all platforms.
    However, the change was not met with open arms. One place where this move is expected to cause friction is in enterprise environments, where some companies like to keep an eye on what certificates are allowed in the root store of their devices.
    “This will generate more work for system administrators,” Bogdan Popovici, an IT administrator at a large software company in Iasi, Romania, told ZDNet. “We now have another root store list to manage, new group policies to set up, and a new changelog to follow. We’re already busy as it is.”
    “This is not an improvement! I need another root store to maintain like I need a hole in my head,” said Reddit user Alan Shutko. “It just makes it more difficult for companies that have their own CA to keep everything in sync.”