More stories

  • NSA warns against using DoH inside enterprise networks

    Image: ZDNet
    The US National Security Agency today published a guide on the benefits and risks of encrypted DNS protocols, such as DNS-over-HTTPS (DoH), which have become widely used over the past two years.

    The US cybersecurity agency warns that while technologies like DoH can encrypt and hide user DNS queries from network observers, they also have downsides when used inside corporate networks.
    “DoH is not a panacea,” the NSA said in a security advisory [PDF] published today, claiming that the use of the protocol gives companies a false sense of security, echoing many of the arguments presented in a ZDNet feature on DoH in October 2019.
    The NSA said that DoH does not fully prevent threat actors from seeing a user’s traffic and that when deployed inside networks, it can be used to bypass many security tools that rely on sniffing classic (plaintext) DNS traffic to detect threats.
    Furthermore, the NSA argues that many of today’s DoH-capable DNS resolvers are hosted externally, outside of the company’s control and beyond its ability to audit.
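    To make the trade-off concrete, here is a rough sketch (in Python, using the requests library) of what a DoH lookup involves: an ordinary DNS query is wrapped inside an HTTPS request, so a network monitor sees only TLS traffic to the resolver. The Cloudflare endpoint is used purely as an example of a public DoH resolver, and the query bytes are built by hand for illustration.

        import base64
        import requests  # pip install requests

        # Hand-built DNS query for example.com (A record):
        # 12-byte header (ID=0x1234, recursion desired, one question) followed by the question section.
        header = b"\x12\x34\x01\x00\x00\x01\x00\x00\x00\x00\x00\x00"
        qname = b"".join(bytes([len(label)]) + label.encode() for label in "example.com".split(".")) + b"\x00"
        question = qname + b"\x00\x01" + b"\x00\x01"  # QTYPE=A, QCLASS=IN
        query = header + question

        # RFC 8484: base64url-encode the raw DNS message, without padding.
        dns_param = base64.urlsafe_b64encode(query).rstrip(b"=").decode()

        resp = requests.get(
            "https://cloudflare-dns.com/dns-query",   # example public DoH resolver
            params={"dns": dns_param},
            headers={"accept": "application/dns-message"},
            timeout=5,
        )
        print(resp.status_code, len(resp.content), "bytes of DNS answer, all carried inside TLS")

    To a passive observer this is indistinguishable from ordinary HTTPS traffic, which is exactly why DNS-based monitoring tools lose visibility.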
    NSA: Use your own DoH resolvers, not third-party ones
    The NSA urges companies to avoid using encrypted DNS technologies inside their own networks, or at least to use a DoH-capable DNS resolver that is hosted internally and under their control.
    Moreover, the NSA argues that this same advice should also be applied to classic DNS servers, not just encrypted/DoH ones.

    “NSA recommends that an enterprise network’s DNS traffic, encrypted or not, be sent only to the designated enterprise DNS resolver,” the agency said.
    “This ensures proper use of essential enterprise security controls, facilitates access to local network resources, and protects internal network information.
    “All other DNS resolvers should be disabled and blocked,” the security agency said.
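    As a minimal sketch of what “sent only to the designated enterprise DNS resolver” can look like at the client level, the snippet below (using the dnspython library) ignores whatever resolvers the operating system is configured with and queries a single internal server. The 10.0.0.53 address and the hostname are placeholders, not values from the advisory, and in practice the NSA’s recommendation is enforced with network policy rather than application code.

        import dns.resolver  # pip install dnspython

        ENTERPRISE_RESOLVER = "10.0.0.53"  # placeholder for your internal DNS server

        resolver = dns.resolver.Resolver(configure=False)  # ignore the OS resolver settings
        resolver.nameservers = [ENTERPRISE_RESOLVER]

        # Placeholder hostname; every lookup goes to the designated resolver and nowhere else.
        for record in resolver.resolve("intranet.example.com", "A"):
            print(record.address)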
    CISA issued a similar warning last year
    But the NSA is not alone in urging caution about encrypted DNS protocols, both DoH and its counterpart, DoT (DNS-over-TLS).
    In April last year, the Cybersecurity and Infrastructure Security Agency also issued a directive asking all US federal agencies to disable DoH and DoT inside their networks due to security risks.
    CISA told agencies to wait until its engineers could provide an official government-hosted DoH/DoT resolver, which would mitigate the risks of sending government DoH/DoT traffic to third-party DNS providers.
    The NSA advisory also comes after Iranian cyberspies were seen using DoH to exfiltrate data from hacked networks without being detected.
    Furthermore, free tools released on GitHub have made it trivial to hijack encrypted DoH connections to hide stolen data and bypass classic DNS-based defensive software.

  • Xayn introduces user-friendly and privacy-protecting web search

    I like the idea that users can take back control of their data in a variety of ways, and I really like the fact that my web search results are not being used to direct ultra-targeted ads toward me.
    Image: Xayn
    I have been using DuckDuckGo for a while now, I use Presearch when browsing with Chrome, and Startpage is the search engine in my Edge browser.
    Recently, I have been trying out German tech startup Xayn’s app on my Android device.
    It is based on research in privacy-protecting AI and stands for transparency and ethical AI made in Europe.
    The app lets you have control over its search algorithms.
    By swiping left or right on the results, you can influence what results are displayed and can teach the algorithms which results you want to see more of in the future.
    Its AI model is a quantized, tiny, multilingual Sentence-BERT optimized for mobile that understands the natural language of your query as well as of the results, mapping both into points in a semantic space.

    Then it uses an unsupervised clustering model to group these points into different clusters of interest — for example, sports or arts.
    For new results, it then only calculates the distance to these clusters, which reduces the computational cost of future searches.
    A third model, called ListNet, analyzes your history of search interactions to learn which types of domains you prefer — for example, Wikipedia rather than Instagram.
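    Xayn has not published its code, but the general embed, cluster, and rank idea can be sketched with off-the-shelf libraries. The snippet below uses a public multilingual Sentence-BERT model and k-means purely as stand-ins for Xayn’s quantized on-device models, and the example result strings are invented.

        # pip install sentence-transformers scikit-learn
        from sentence_transformers import SentenceTransformer
        from sklearn.cluster import KMeans

        # Public multilingual Sentence-BERT model, a stand-in for Xayn's quantized mobile model.
        model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

        # Invented examples of results the user has swiped right on.
        liked_results = [
            "Liverpool win the league after a dramatic final day",
            "Champions League quarter-final preview",
            "New exhibition of impressionist painting opens in Berlin",
        ]

        # 1) Embed the text so semantically similar results end up close together.
        embeddings = model.encode(liked_results)

        # 2) Group the embeddings into clusters of interest (e.g. sports vs. arts).
        interests = KMeans(n_clusters=2, n_init=10, random_state=0).fit(embeddings)

        # 3) Score a new result by its distance to the nearest interest cluster.
        candidate = model.encode(["Ticket prices announced for the cup final"])
        distances = interests.transform(candidate)  # distance to each cluster centre
        print("closest cluster:", int(distances.argmin()), "distance:", float(distances.min()))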
    Search companies want to know as much as possible about you so that they can control the results that are delivered to you.
    The problem is that, to get the most accurate search results, users have to compromise their privacy. Xayn, by contrast, keeps all of this data with the user.
    With Xayn, users can turn the AI-driven personalization on to get tailored search results, or turn it off if they do not want the feature.

    Image: Xayn
    The app gives you one-handed control and zero-click search to make it easy to use. You can collect, store, and sort through your favorite web content so that you do not lose any information.
    Your home screen shows your own personal feed of the web, determined by your search history.
    Leif Nissen Lundbæk, co-founder and CEO at Xayn, said: 

    “I’ve always hated having to choose between privacy and convenience when searching online.
    I also found it creepy that I didn’t know why certain results were shown to me by the algorithms. Despite all that, I was still using established search giants, because I lost too much time finding what I was looking for with the privacy alternatives.”

    The only challenge I can see with Xayn is that users might swipe away all of the news articles that do not match their fundamental beliefs.
    There is a risk of the app displaying ever more right-wing or left-wing content over time. Users prefer to see the types of articles they like, forming a ‘bubble of belief’ that validates their choices.
    A really clever AI will be able to solve this issue one day — but, for now, this neat customizable app will deliver exactly the results you want to see with the privacy you need.

  • Security software maker Tufin soars on raised Q4 outlook

    Shares of Tel Aviv-based security software maker Tufin surged by over 20% in late trading after the company this afternoon raised its outlook for the December-ending Q4, and named software industry veteran Ray Brancato as its chief revenue officer.
    Tufin said it now expects revenue in Q4 in a range of $30.5 million to $31.1 million, above a prior forecast offered in November of $24 million to $29 million. 
    It is also higher than the $26.8 million Wall Street has been expecting.
    Tufin’s products, such as SecureTrack, manage many different kinds of firewalls and related devices, in on-premises and cloud deployments.
    Brancato, who will report to Tufin CEO and co-founder Ruvi Kitov, was previously at AnyVision, based in the town of Holon in Israel, where he was also chief revenue officer. AnyVision specializes in artificial intelligence technologies for face, object and human recognition in mass crowd events, and claims to be “the world’s leading developer of facial, body and object recognition platforms.”
    Kitov said Brancato’s “deep experience directing global sales strategy and execution in the software industry will be a strong asset as we work to scale up significantly in the coming years.” 

    Kitov said the higher forecast for the quarter just ended “continued to be driven by the accelerating trends of automation and the shift towards Zero-Trust.”
    Brancato will take over the responsibilities of outgoing senior vice president of sales Kevin Maloney, who has been with the company five and a half years. Tufin noted that company revenue tripled during his tenure to $100 million. 
    Kitov thanked Maloney for “his great contributions to Tufin in leading our sales organization to new heights during an important phase of our growth leading up to our IPO, and most recently leading sales through the challenging environment of the last 12 months.”
    Shares of Tufin are up 23% at $18.98 in late trading. Stocks of security software firms have been generally on an upward path as the massive SolarWinds attack in December caused Wall Street to raise its estimates for the entire industry’s potential sales. 


  • Facebook sues two Chrome extension devs for scraping user data

    Image: Kon Karampelas
    Facebook filed a lawsuit today in Portugal against two Portuguese nationals for developing browser extensions that scraped user data from Facebook sites.
    “When people installed these extensions on their browsers, they were installing concealed code designed to scrape their information from the Facebook website, but also information from the users’ browsers unrelated to Facebook — all without their knowledge,” Jessica Romero, Facebook’s Director of Platform Enforcement and Litigation, said today.
    “If the user visited the Facebook website, the browser extensions were programmed to scrape their name, user ID, gender, relationship status, age group and other information related to their account,” Romero said.
    All of the extensions were developed by a software company named “Oink and Stuff,” which specializes in creating Android apps and browser extensions for Chrome, Firefox, Opera, and Microsoft Edge.
    While the company develops a wide array of browser extensions, Facebook said it found malicious data-collection behavior inside four extensions named Web for Instagram plus DM, Blue Messenger, Emoji keyboard, and Green Messenger, which Facebook said “functioned like spyware.”
    All four extensions are still available on the official Chrome Web Store at the time of writing, and have more than 54,000 installs, combined.
    Facebook is now asking a Portuguese judge to issue a permanent injunction against the Oink and Stuff team and force the company to delete all the Facebook user data they acquired through the four extensions.

    A request for comment was sent to Oink and Stuff, but the company had not replied before this article’s publication due to timezone differences.
    Today’s case marks Facebook’s latest legal action against rogue app and extension developers. Since early 2019, Facebook’s legal department has been filing lawsuits against third parties that have abused its platform, such as:
    March 2019 – Facebook sues two Ukrainian browser extension makers (Gleb Sluchevsky and Andrey Gorbachov) for allegedly scraping user data.
    August 2019 – Facebook sues LionMobi and JediMobi, two Android app developers on allegations of advertising click fraud.
    October 2019 – Facebook sues Israeli surveillance vendor NSO Group for developing and selling a WhatsApp zero-day that was used in May 2019 to attack attorneys, journalists, human rights activists, political dissidents, diplomats, and government officials.
    December 2019 – Facebook sued ILikeAd and two Chinese nationals for using Facebook ads to trick users into downloading malware.
    February 2020 – Facebook sued OneAudience, an SDK maker that secretly collected data on Facebook users.
    March 2020 – Facebook sued Namecheap, one of the biggest domain name registrars on the internet, to unmask hackers who registered malicious domains through its service.
    April 2020 – Facebook sued LeadCloak for providing software to cloak deceptive ads related to COVID-19, pharmaceuticals, diet pills, and more.
    June 2020 – Facebook sued to unmask and take over 12 domains containing Facebook brands and used to scam Facebook users.
    June 2020 – Facebook sued MGP25 Cyberint Services, a company that operated an online website that sold Instagram likes and comments.
    June 2020 – Facebook sued the owner of Massroot8.com, a website that stole Facebook users’ passwords.
    August 2020 – Facebook sued MobiBurn, the maker of an advertising SDK accused of scraping user data.
    August 2020 – Facebook sued the owner of Nakrutka, a website that sold Instagram likes, comments, and followers.
    October 2020 – Facebook sued the maker of two Chrome extensions for scraping user data.
    November 2020 – Facebook sued a Turkish national for operating a network of at least 20 Instagram clones.

  • SolarWinds defense: How to stop similar attacks

    One of the most irritating things about the SolarWinds attack was that the Russian crack went unnoticed from March to December 2020. During that time, the Russian government’s SolarWinds hack was opening the door to the secrets of numerous top American government agencies and tech companies. Even now, we’re still trying to get our minds around just how widespread and bad the SolarWinds cracks were. 


    The root causes of this crack were a dangerous set of software supply-chain failures. It’s too late for anything but damage control for SolarWinds, but The Linux Foundation has found several lessons to make sure your programs, whether open source or proprietary, avoid SolarWinds-style disasters.
    David A. Wheeler, the Linux Foundation’s Director of Open Source Supply Chain Security, explained that in the Orion attack, the malicious code was inserted into Orion by subverting the program’s build environment. This is the process in which a program is compiled from source code into the binary executable deployed by end users. In this case, the security company CrowdStrike worked out that the Sunspot malware watched the build server for build commands and silently replaced some of Orion’s source code files with malware.
    By entering the program before it is even properly a program, this hack makes most conventional security advice useless. For example:

    “Only install signed versions” doesn’t help because this software was signed.

    “Update your software to the latest version” doesn’t help because the updated software was the subverted one. 

    “Monitor software behavior” eventually caught the problem, but the attack was quite stealthy and was only detected after tremendous damage had been done. 

    “Review source code” is not a certain defense either. In Orion’s case, it’s not even certain that developers could have spotted the source code changes. The changes were carefully written to look like the expected code. In addition, since the attackers had control of the build environment, they could have inserted the attack without it being visible to software developers.

    Finally, since Orion isn’t open-source software, no one could independently audit the code. Only the company’s developers could review it or its build system and configurations. As Wheeler said, “If we needed further evidence that obscurity of software source code doesn’t automatically provide security, this is it.”
    So, what can you do? Well, using open source is a good start. There’s nothing magical about open-source methodology. Security mistakes can still enter the code, but at least you have the possibility of more eyes looking for problems before they blow up. 

    In addition, Wheeler pointed out, companies must harden their build environments against attackers. For example, SolarWinds used extremely poor developer security practices, including using the insecure FTP protocol for file transfers and publicly revealing passwords. Any build environment still doing these things is just asking for a security breach. 
    Build systems are critical production systems, and they should be treated as such. If anything, they should be held to even stricter security requirements than production environments. This is code security 101. 
    Even when you’ve secured your build system to the best of your abilities, there is no guarantee that it’s safe. In the long run, Wheeler thinks there’s only one truly strong countermeasure for this kind of attack: verified reproducible builds. 
    “A reproducible build,” wrote Wheeler, is one “that always produces the same outputs given the same inputs so that the build results can be verified. A verified reproducible build is a process where independent organizations produce a build from source code and verify that the built results come from the claimed source code.”
    That’s more of a good idea than something you can do today: very few programs can currently be verified. The Linux Foundation and the Civil Infrastructure Platform have been funding work, including the Reproducible Builds project, to make verified reproducible builds possible.
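    As a toy illustration of the “verify” step (not the Reproducible Builds project’s actual tooling), the snippet below hashes the artifact produced by two independent builders and complains if they differ; the file paths are hypothetical.

        import hashlib
        import sys

        def sha256_of(path: str) -> str:
            """Hash a build artifact so independently produced builds can be compared."""
            digest = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    digest.update(chunk)
            return digest.hexdigest()

        # Hypothetical paths: the vendor's published artifact and an independent rebuild from the same source.
        official = sha256_of("dist/release.tar.gz")
        rebuilt = sha256_of("rebuild/release.tar.gz")

        if official == rebuilt:
            print("OK: the independent rebuild matches the published artifact")
        else:
            sys.exit("MISMATCH: the published binary does not match a build from the claimed source")

    The hard part, of course, is making the build deterministic enough that those hashes can ever match, which is exactly what the Reproducible Builds work addresses.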
    The Linux Foundation wants everyone to start implementing and requiring verified reproducible builds. Yes, this won’t be easy. Most software is not designed to be reproducible in its build environment. It may well take years to make software reproducible. 
    Wheeler wrote: 

    Many changes must be made to make software reproducible, so resources (time and money) are often needed. And there’s a lot of software that needs to be reproducible, including operating system packages and library level packages. There are package distribution systems that would need to be reviewed and likely modified. I would expect some of the most critical software to become reproducible first, and then less critical software would increase over time as pressure increases to make more software verified reproducible. It would be wise to develop widely-applicable standards and best practices for creating reproducible builds. Once software is reproducible, others will need to verify the build results for the given source code to counter these kinds of attacks. 
    Reproducible builds are much easier for open-source software (OSS) because there’s no legal impediment to having many verifiers. Closed source software developers will have added challenges; their business models often depend on hiding source code. It’s still possible to have “trusted rebuilders” worldwide to verify closed source software, even though it’s more challenging and the number of rebuilders would necessarily be smaller.
    The information technology industry is generally moving away from “black boxes” that cannot be inspected and verified and towards components that can be reviewed. So this is part of a general industry trend; it’s a trend that needs to be accelerated.

    Can’t happen? Why not? Auditors have access to the financial data and review the financial systems of most enterprises. Software companies need code and build auditors. Otherwise, we will certainly see more software build attacks spreading malware.
    Too expensive? Think again. SolarWinds is already being hit by its first class-action lawsuit. More will follow. The company’s stock has also seen a 40% drop since news of its failure broke. 
    Wheeler also reminded us that “Attackers will always take the easiest path, so we can’t ignore other attacks.”

    Specifically, since most attacks exploit unintentional vulnerabilities in code, we must prevent these unintentional vulnerabilities. These mitigations include changing tools and interfaces to avoid problems; detecting residual vulnerabilities before deployment; and educating developers on developing secure software. For example, there are free edX courses from Open Source Security Foundation (OpenSSF) on how to develop secure software.
    Wheeler also noted that “Applications are mostly reused software (with a small amount of custom code), so this reused software’s software supply chain is critical. Reused components are often extremely out-of-date. Thus, they have many publicly-known unintentional vulnerabilities. In fact, reused components with known vulnerabilities are among the topmost common problems in web applications. The LF’s LFX security tools, GitHub’s Dependabot, GitLab’s dependency analyzers, and many other tools & services can help detect reused components with known vulnerabilities.”
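    Under the hood, those tools boil down to the kind of lookup sketched below against the public OSV vulnerability database. This is only a minimal illustration of the idea, not a replacement for Dependabot or the LFX tools, and jinja2 2.4.1 is just an arbitrary example of an old component.

        import requests  # pip install requests

        def known_vulnerabilities(name: str, version: str, ecosystem: str = "PyPI") -> list:
            """Ask the public OSV database whether this exact package version has known advisories."""
            resp = requests.post(
                "https://api.osv.dev/v1/query",
                json={"version": version, "package": {"name": name, "ecosystem": ecosystem}},
                timeout=10,
            )
            resp.raise_for_status()
            return resp.json().get("vulns", [])

        # Arbitrary example of an old, widely reused component.
        for vuln in known_vulnerabilities("jinja2", "2.4.1"):
            print(vuln["id"], "-", vuln.get("summary", "no summary"))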
    Again, simply using OSS doesn’t mean it’s safe. Wheeler added, “Vulnerabilities in widely-reused OSS can cause widespread problems, so the LF is working to identify such OSS so that it can be reviewed and hardened further.”
    Even if you’re using proprietary programs, the code behind them may be based on open source. Synopsys has found that 99% of commercial software programs include at least one open-source component. That’s fine, but 91% of those included out-of-date or abandoned open-source code. That’s not good at all. 
    Malicious code can also hide in the supply chain. For instance, Wheeler stated, “most malicious code gets into applications through library ‘typosquatting.’ That is, by creating a malicious library with a name that looks like a legitimate library.” 
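    A very rough defense against that kind of typosquatting is to compare every dependency name against an allow-list of the libraries you actually meant to use and flag near misses. The snippet below does this with Python’s standard difflib; the package names are made-up examples rather than a real project’s dependencies.

        import difflib

        # Hypothetical allow-list of the libraries this project intends to depend on.
        KNOWN_GOOD = ["requests", "urllib3", "numpy", "pandas", "cryptography"]

        def looks_like_typosquat(candidate: str) -> bool:
            """Flag names suspiciously close to, but not equal to, a known library."""
            if candidate in KNOWN_GOOD:
                return False
            return bool(difflib.get_close_matches(candidate, KNOWN_GOOD, n=1, cutoff=0.85))

        for name in ["requests", "reqeusts", "crypt0graphy", "some-new-lib"]:
            print(name, "->", "suspicious" if looks_like_typosquat(name) else "ok")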
    For users, that means they should start asking for a software bill of materials (SBOM) so they know exactly what it is they are using. Yes, that’s yet another argument for open source. Orion, like all proprietary software, is a black box. No one except its builders knows what’s in it. And with Orion, it appears even they never knew until it blew up in their users’ faces. 
    The US National Telecommunications and Information Administration (NTIA) has been encouraging SBOM adoption. The Linux Foundation’s Software Package Data Exchange (SPDX) format is an SBOM format that’s being widely adopted. 
    Armed with SBOM information, you can examine what component versions are used in your program. Of course, this requires you to pay attention to what’s in your programs. For example, Equifax’s infamous failure was due to its simply not paying attention when a public fix was issued for the Apache Struts library it used in its programs. Equifax tried to pin the blame on Apache, but when the dust settled, Equifax admitted it was entirely at fault.
    When a program uses components with known vulnerabilities, that’s a big red flag. True, some vulnerabilities may not be exploitable, but, Wheeler states, far “too many application developers simply don’t update dependencies even when they are exploitable. To be fair, there’s a chicken-and-egg problem here: specifications are in the process of being updated, tools are in development, and many software producers aren’t ready to provide SBOMs. So users should not expect that most software producers will have SBOMs ready today. However, they do need to create a demand for SBOMs.”
    Wheeler also wants to see software distributors embracing SBOM information. “For many OSS projects this can typically be done, at least in part, by providing package management information that identifies their direct and indirect dependencies (e.g., in package.json, requirements.txt, Gemfile, Gemfile.lock, and similar files). Many tools can combine this information to create more complete SBOM information for larger systems.”
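    Once a supplier does provide an SPDX-format SBOM, consuming it is straightforward. The sketch below reads a hypothetical sbom.spdx.json document and lists every component and version it declares, which is the raw material for the kind of vulnerability checks described above.

        import json

        # Hypothetical SPDX 2.2 JSON document exported by a build or packaging tool.
        with open("sbom.spdx.json") as f:
            sbom = json.load(f)

        # Every package the shipped product declares, with its version.
        for pkg in sbom.get("packages", []):
            print(pkg.get("name"), pkg.get("versionInfo", "unknown version"))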
    Finally, Wheeler believes, “Organizations should invest in OpenChain conformance and require their suppliers to implement a process designed to improve trust in a supply chain.”  OpenChain is the Linux Foundation’s standard for open-source license compliance programs across the software supply chain, and it is also an ISO/IEC standard: ISO/IEC 5230:2020.
    With all this in mind, you should put all the below on your software development and deployment list: 

    Harden software build environments

    Move towards verified reproducible builds 

    Change tools & interfaces so unintentional vulnerabilities are less likely

    Educate developers 

    Use vulnerability detection tools when developing software

    Use tools to detect known-vulnerable components when developing software

    Improve widely-used OSS 

    Ask for SBOMs in SPDX format. Many software producers aren’t ready to provide one yet, but creating the demand will speed progress

    Determine if subcomponents used have known vulnerabilities 

    Work towards providing SBOM information if we produce software for others

    Implement OpenChain 

    If you don’t, as Wheeler reminds us, “Those who do not learn from history are often doomed to repeat it.” Do you want your company to be the next SolarWinds? I don’t think so!

  • Cisco says it won't patch 74 security bugs in older RV routers that reached EOL

    Image: Cisco // Composition: ZDNet
    Networking equipment vendor Cisco said yesterday it was not going to release firmware updates to fix 74 vulnerabilities that had been reported in its line of RV routers, which had reached end-of-life (EOL).
    Affected devices include Cisco Small Business RV110W, RV130, RV130W, and RV215W systems, which can be used as routers, firewalls, and VPNs.
    All four reached EOL in 2017 and 2018, and their last maintenance window under paid support contracts ended on December 1, 2020.
    In three security advisories posted yesterday [1, 2, 3], Cisco said that since December it has received bug reports for vulnerabilities ranging from simple denial-of-service issues that crashed devices to security flaws that could be used to gain access to root accounts and hijack routers.
    In total, the device maker said it received 74 bug reports but that it wouldn’t be releasing any software patches, mitigations, or workarounds, as the devices had reached EOL years before.
    Instead, the company advised that customers move operations to newer devices, such as the RV132W, RV160, or RV160W models, which provide the same features and which are still being actively supported.
    Some customers might not like the company’s decision, but the good news is that none of the disclosed bugs can be exploited easily.

    Cisco said that all of the vulnerabilities require an attacker to have valid credentials for the device, which reduces the risk of a network being attacked in the coming weeks or months and gives administrators time to plan a migration to newer gear or, failing that, to deploy their own countermeasures.
    The CVE identifiers of the bugs Cisco declined to patch in its EOL routers are listed below:
    CVE-2021-1146
    CVE-2021-1147
    CVE-2021-1148
    CVE-2021-1149
    CVE-2021-1150
    CVE-2021-1151
    CVE-2021-1152
    CVE-2021-1153
    CVE-2021-1154
    CVE-2021-1155
    CVE-2021-1156
    CVE-2021-1157
    CVE-2021-1158
    CVE-2021-1159
    CVE-2021-1160
    CVE-2021-1161
    CVE-2021-1162
    CVE-2021-1163
    CVE-2021-1164
    CVE-2021-1165
    CVE-2021-1166
    CVE-2021-1167
    CVE-2021-1168
    CVE-2021-1169
    CVE-2021-1170
    CVE-2021-1171
    CVE-2021-1172
    CVE-2021-1173
    CVE-2021-1174
    CVE-2021-1175
    CVE-2021-1176
    CVE-2021-1177
    CVE-2021-1178
    CVE-2021-1179
    CVE-2021-1180
    CVE-2021-1181
    CVE-2021-1182
    CVE-2021-1183
    CVE-2021-1184
    CVE-2021-1185
    CVE-2021-1186
    CVE-2021-1187
    CVE-2021-1188
    CVE-2021-1189
    CVE-2021-1190
    CVE-2021-1191
    CVE-2021-1192
    CVE-2021-1193
    CVE-2021-1194
    CVE-2021-1195
    CVE-2021-1196
    CVE-2021-1197
    CVE-2021-1198
    CVE-2021-1199
    CVE-2021-1200
    CVE-2021-1201
    CVE-2021-1202
    CVE-2021-1203
    CVE-2021-1204
    CVE-2021-1205
    CVE-2021-1206
    CVE-2021-1207
    CVE-2021-1208
    CVE-2021-1209
    CVE-2021-1210
    CVE-2021-1211
    CVE-2021-1212
    CVE-2021-1213
    CVE-2021-1214
    CVE-2021-1215
    CVE-2021-1216
    CVE-2021-1217
    CVE-2021-1307
    CVE-2021-1360

  • Switching to Signal? Turn on these settings now for greater privacy and security

    Many people are making the switch from WhatsApp to Signal, largely because of the increased privacy and security that Signal offers.
    But did you know that, with a few simple tweaks, you can make Signal even more secure?
    There are a few settings I suggest you enable. There are some cosmetic differences between the iOS and Android versions of Signal, but these tips apply to both platforms.
    The first place you should head over to is the Settings screen. To get there, tap on your initials in the top-left corner of the screen (on Android you can also tap the three dots on the top-left and then Settings).
    There are three settings on iOS and four on Android I recommend turning on, and a few others worth taking a look at.
    Screen Lock (iOS and Android): Means you have to enter your biometrics (Face ID, Touch ID, fingerprint or passcode) to access the app
    Enable Screen Security (iOS) or Screen Security (Android): On the iPhone this prevents data previews being shown in the app switcher, while on Android it prevents screenshots being taken
    Registration Lock (iOS and Android):  Requires your PIN when registering with Signal (a handy way to prevent a second device being added)
    Incognito Keyboard (Android only): Prevents the keyboard from sending what you type to a third-party, which might allow sensitive data to leak
    While you’re here, Always Relay Calls might also be worth enabling: it routes all of your calls through a Signal server, hiding your IP address from the recipient, but it does degrade call quality.

    On top of this, I suggest that you tame notifications, especially if you are worried about shoulder surfers seeing your messages.
    To do this, head back to the main Settings screen and go to Notifications. For Show, change to No Name or Content for iOS and No name or message for Android.
    The iOS version of Signal also has a feature called Censorship Circumvention under Advanced, which is handy if you live in an area where active internet censorship blocks Signal. If this does not apply to you, leave it off.

  • Apple removes feature that allowed its apps to bypass macOS firewalls and VPNs

    Image: Markus Spiske
    Apple has removed a controversial feature from the macOS operating system that allowed 53 of Apple’s own apps to bypass third-party firewalls, security tools, and VPN apps installed by users for their protection.
    Known as the ContentFilterExclusionList, the list was included in macOS 11, also known as Big Sur.
    The exclusion list included some of Apple’s biggest apps, like the App Store, Maps, and iCloud, and was physically located on disk at: /System/Library/Frameworks/NetworkExtension.framework/Versions/Current/Resources/Info.plist.

    Image: Simone Margaritelli
    Its presence was discovered last October by several security researchers and app makers who realized that their security tools weren’t able to filter or inspect traffic for some of Apple’s applications.
    Security researchers such as Patrick Wardle were quick to point out at the time that this exclusion list was a security nightmare waiting to happen. They argued that malware could latch on to legitimate Apple apps included on the list and then bypass firewalls and security software.
    The exclusion list was also widely panned by privacy experts, since macOS users risked exposing their real IP address and location when using Apple apps, as VPN products wouldn’t be able to mask their location.
    Apple said it was temporary
    Contacted for comment at the time, Apple told ZDNet the list was temporary but did not provide any details. An Apple software engineer later told ZDNet the list was the result of a series of bugs in Apple apps, rather than anything nefarious from the Cupertino-based company.

    The bugs were related to Apple deprecating network kernel extensions (NKEs) in Big Sur and introducing a new system called the Network Extension Framework, and to Apple engineers not having enough time to iron out all the issues before the Big Sur launch last fall.
    But some of these bugs have been fixed in the meantime, and yesterday, with the release of macOS Big Sur 11.2 beta 2, Apple felt it was safe to remove the ContentFilterExclusionList from the OS code (as spotted by Wardle earlier today).
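    If you want to check whether your own copy of Big Sur still ships the list, a quick look at the Info.plist mentioned above is enough. The sketch below uses Python’s standard plistlib and assumes the key is named ContentFilterExclusionList, as reported.

        import plistlib

        # Path reported above; present on macOS Big Sur systems.
        PLIST = ("/System/Library/Frameworks/NetworkExtension.framework/"
                 "Versions/Current/Resources/Info.plist")

        with open(PLIST, "rb") as f:
            info = plistlib.load(f)

        excluded = info.get("ContentFilterExclusionList", [])
        if excluded:
            print(f"{len(excluded)} bundled apps are excluded from content filters:")
            for entry in excluded:
                print(" ", entry)
        else:
            print("No ContentFilterExclusionList entry found; the exclusions appear to be gone.")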
    Once Big Sur 11.2 is released, all Apple apps will once again be subject to firewalls and security tools, and they’ll be compatible with VPN apps.