More stories

  • SolarWinds defense: How to stop similar attacks

    One of the most irritating things about the SolarWinds attack was that the Russian crack went unnoticed from March to December 2020. During that time, the Russian government’s SolarWinds hack was opening the door to the secrets of numerous top American government agencies and tech companies. Even now, we’re still trying to get our minds around just how widespread and bad the SolarWinds cracks were. 


    The root causes of this crack were a dangerous set of software supply-chain failures. It’s too late for anything but damage control for SolarWinds, but The Linux Foundation has drawn several lessons to help make sure your programs, whether open source or proprietary, avoid SolarWinds-style disasters.
    David A. Wheeler, the Linux Foundation’s Director of Open Source Supply Chain Security, explained that in the Orion attack, the malicious code was inserted into Orion by subverting the program’s build environment. This is the process in which a program is compiled from source code into the binary executable deployed to end-users. In this case, the security company CrowdStrike worked out that the Sunspot malware watched the build server for build commands and silently replaced some of Orion’s source code files with malware. 
    By entering the program before it’s even properly a program, this hack makes most conventional security advice useless. For example,  

    “Only install signed versions” doesn’t help because this software was signed.

    “Update your software to the latest version” doesn’t help because the updated software was the subverted one. 

    “Monitor software behavior” eventually detected the problem, but the attack was quite stealthy and was only detected after tremendous damage was done. 

    “Review source code” is not a certain defense either. In Orion’s case, it’s not even certain that developers could have spotted the source code changes. The changes were carefully written to look like the expected code. In addition, since the attackers had control of the build environment, they could have inserted the attack without it being visible to software developers.

    Finally, since Orion isn’t open-source software, no one could independently audit the code. Only the company’s developers could review it or its build system and configurations. As Wheeler said, “If we needed further evidence that obscurity of software source code doesn’t automatically provide security, this is it.”
    So, what can you do? Well, using open source is a good start. There’s nothing magical about open-source methodology. Security mistakes can still enter the code, but at least you have the possibility of more eyes looking for problems before they blow up. 

    In addition, Wheeler pointed out, companies must harden their build environments against attackers. For example, SolarWinds used extremely poor developer security practices. This included using the insecure ftp protocol for file transfers and publicly revealing passwords. Any build environment still using these is just asking for a security breach. 
    Build systems are critical production systems, and they should be treated as such. If anything, they should be held to a higher level of security requirements than production environments. This is code security 101. 
    Even when you’ve secured your build system to the best of your abilities, it’s still not a sure thing that it’s safe. In the long run, Wheeler thinks there’s only one truly strong countermeasure for this kind of attack: verified reproducible builds. 
    “A reproducible build,” wrote Wheeler, is one “that always produces the same outputs given the same inputs so that the build results can be verified. A verified reproducible build is a process where independent organizations produce a build from source code and verify that the built results come from the claimed source code.”
    That’s more of a good idea than something you can do today; very few programs can currently be verified this way. The Linux Foundation and the Civil Infrastructure Platform have been funding work, including the Reproducible Builds project, to make verified reproducible builds practical.
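    The verification step itself is straightforward once builds are reproducible: an independent party rebuilds the claimed source and compares artifact hashes against the vendor’s shipped binary. Below is a minimal Python sketch of that comparison; the file paths and script name are hypothetical examples, not references to any real project layout.

```python
# Minimal sketch of the verification step behind a verified reproducible build:
# independently rebuild the software, then compare artifact hashes with the
# vendor's shipped binary. The file paths below are hypothetical examples.
import hashlib
import sys

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Usage: python verify_build.py <vendor_artifact> <independently_rebuilt_artifact>
    vendor, rebuilt = sys.argv[1], sys.argv[2]
    vendor_hash, rebuilt_hash = sha256_of(vendor), sha256_of(rebuilt)
    print(f"vendor artifact:   {vendor_hash}")
    print(f"independent build: {rebuilt_hash}")
    if vendor_hash == rebuilt_hash:
        print("MATCH: the shipped binary corresponds to the claimed source code.")
    else:
        print("MISMATCH: the build is not reproducible, or an artifact was tampered with.")
        sys.exit(1)
```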
    The Linux Foundation wants everyone to start implementing and requiring verified reproducible builds. Yes, this won’t be easy. Most software is not designed to be reproducible in its build environment. It may well take years to make software reproducible. 
    Wheeler wrote: 

    Many changes must be made to make software reproducible, so resources (time and money) are often needed. And there’s a lot of software that needs to be reproducible, including operating system packages and library level packages. There are package distribution systems that would need to be reviewed and likely modified. I would expect some of the most critical software to become reproducible first, and then less critical software would increase over time as pressure increases to make more software verified reproducible. It would be wise to develop widely-applicable standards and best practices for creating reproducible builds. Once software is reproducible, others will need to verify the build results for the given source code to counter these kinds of attacks. 
    Reproducible builds are much easier for open-source software (OSS) because there’s no legal impediment to having many verifiers. Closed source software developers will have added challenges; their business models often depend on hiding source code. It’s still possible to have “trusted rebuilders” worldwide to verify closed source software, even though it’s more challenging and the number of rebuilders would necessarily be smaller.
    The information technology industry is generally moving away from “black boxes” that cannot be inspected and verified and towards components that can be reviewed. So this is part of a general industry trend; it’s a trend that needs to be accelerated.

    Can’t happen? Why not? Auditors have access to the financial data and review the financial systems of most enterprises. Software companies need code and build auditors. Otherwise, we will certainly see more software build attacks spreading malware.
    Too expensive? Think again. SolarWinds is already being hit by its first class-action lawsuit. More will follow. The company’s stock has also seen a 40% drop since news of its failure broke. 
    Wheeler also reminded us that “Attackers will always take the easiest path, so we can’t ignore other attacks.”

    Specifically, since most attacks exploit unintentional vulnerabilities in code, we must prevent these unintentional vulnerabilities. These mitigations include changing tools and interfaces to avoid problems; detecting residual vulnerabilities before deployment; and educating developers on writing secure software. For example, there are free edX courses from the Open Source Security Foundation (OpenSSF) on how to develop secure software.
    Wheeler also noted that “Applications are mostly reused software (with a small amount of custom code), so this reused software’s software supply chain is critical. Reused components are often extremely out-of-date. Thus, they have many publicly-known unintentional vulnerabilities. In fact, reused components with known vulnerabilities are among the topmost common problems in web applications. The LF’s LFX security tools, GitHub’s Dependabot, GitLab’s dependency analyzers, and many other tools & services can help detect reused components with known vulnerabilities.”
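    Tools such as Dependabot automate this, but the underlying check is easy to sketch. Below is a minimal Python example that asks the OSV.dev vulnerability database, one of the public data sources such tools draw on, whether a single pinned dependency has known advisories; the package name and version are placeholders, and the request shape follows OSV’s documented query API.

```python
# Sketch: check one pinned dependency against the OSV.dev vulnerability database,
# the kind of public data source Dependabot-style tools build on.
# The package name and version below are placeholders.
import json
import urllib.request

def known_vulns(name: str, version: str, ecosystem: str = "PyPI") -> list[str]:
    """Return the IDs of OSV advisories that affect this exact package version."""
    query = {"version": version, "package": {"name": name, "ecosystem": ecosystem}}
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=json.dumps(query).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        result = json.load(resp)
    return [vuln["id"] for vuln in result.get("vulns", [])]

if __name__ == "__main__":
    # Example: an old release of a widely reused component.
    for vuln_id in known_vulns("django", "2.2.0"):
        print("known vulnerability:", vuln_id)
```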
    Again, simply using OSS doesn’t mean it’s safe. Wheeler added, “Vulnerabilities in widely-reused OSS can cause widespread problems, so the LF is working to identify such OSS so that it can be reviewed and hardened further.”
    Even if you’re using proprietary programs, the code behind them may be based on open source. Synopsys has found that 99% of commercial software programs include at least one open-source component. That’s fine, but 91% of those included out-of-date or abandoned open-source code. That’s not good at all. 
    Malicious code can also hide in the supply chain. For instance, Wheeler stated, “most malicious code gets into applications through library ‘typosquatting.’ That is, by creating a malicious library with a name that looks like a legitimate library.” 
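    One simple, if partial, defense against typosquatting is to flag any dependency whose name is suspiciously close to, but not identical to, a well-known package. The sketch below uses a tiny hypothetical allow-list and an edit-similarity threshold purely to illustrate the idea; real tooling would compare against a full registry feed.

```python
# Sketch of a typosquatting heuristic: flag any dependency whose name is
# suspiciously close to, but not the same as, a well-known package name.
# The "popular" set here is a tiny hypothetical sample, not a real registry feed.
from difflib import SequenceMatcher

POPULAR = {"requests", "urllib3", "numpy", "cryptography", "django"}

def looks_like_typosquat(name: str, threshold: float = 0.85) -> str | None:
    """Return the legitimate name this dependency may be imitating, if any."""
    name = name.lower()
    if name in POPULAR:
        return None  # an exact match to a known-good name is fine
    for legit in POPULAR:
        if SequenceMatcher(None, name, legit).ratio() >= threshold:
            return legit
    return None

if __name__ == "__main__":
    for dep in ["reqeusts", "numpy", "crpytography", "flask"]:
        hit = looks_like_typosquat(dep)
        if hit:
            print(f"WARNING: '{dep}' looks like a typosquat of '{hit}'")
```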
    For users, that means they should start asking for a software bill of materials (SBOM) so they will know exactly what it is they are using. Yes, that’s yet another argument for open source. Orion, like all proprietary software, is a black box. No one except its builders knows what’s in it. And with Orion, it appears even they didn’t know until it blew up in their users’ faces. 
    The US National Telecommunications and Information Administration (NTIA) has been encouraging SBOM adoption. The Linux Foundation’s Software Package Data Exchange (SPDX) format is an SBOM format that’s being widely adopted. 
    Armed with SBOM information, you can examine what component versions are used in your program. Of course, this requires you to pay attention to what’s in your programs. For example, Equifax’s infamous failure was due to its simply not paying attention when a public fix was issued for the Apache Struts library it used in its programs. Equifax tried to pin the blame on Apache, but when the dust settled, Equifax admitted it was entirely at fault.
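    In practice, examining what component versions are in use can be as simple as walking the SBOM. The sketch below reads an SPDX JSON document and lists each package with its version; the file name is a hypothetical example, and the packages/name/versionInfo fields are the standard SPDX 2.x JSON keys. The output can then be fed into a vulnerability check like the OSV query sketched earlier.

```python
# Sketch: list component names and versions from an SPDX JSON SBOM.
# "sbom.spdx.json" is a hypothetical file name; "packages", "name", and
# "versionInfo" are standard SPDX 2.x JSON keys.
import json

def list_components(sbom_path: str) -> list[tuple[str, str]]:
    with open(sbom_path, encoding="utf-8") as f:
        doc = json.load(f)
    return [
        (pkg.get("name", "<unnamed>"), pkg.get("versionInfo", "<unknown>"))
        for pkg in doc.get("packages", [])
    ]

if __name__ == "__main__":
    for name, version in list_components("sbom.spdx.json"):
        print(f"{name} == {version}")
```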
    When a program uses components with known vulnerabilities, that’s a big red flag. True, some vulnerabilities may not be exploitable, but, Wheeler states, “far too many application developers simply don’t update dependencies even when they are exploitable. To be fair, there’s a chicken-and-egg problem here: specifications are in the process of being updated, tools are in development, and many software producers aren’t ready to provide SBOMs. So users should not expect that most software producers will have SBOMs ready today. However, they do need to create a demand for SBOMs.”
    Wheeler also wants to see software distributors embracing SBOM information. “For many OSS projects this can typically be done, at least in part, by providing package management information that identifies their direct and indirect dependencies (e.g., in package.json, requirements.txt, Gemfile, Gemfile.lock, and similar files). Many tools can combine this information to create more complete SBOM information for larger systems.”
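    On the producer side, even a pinned requirements.txt already contains most of what a minimal component list needs. The sketch below maps such a manifest onto bare-bones SPDX-style package entries; real SBOM generators emit far more metadata (licenses, checksums, relationships), so treat this only as an illustration of how manifest data feeds an SBOM.

```python
# Sketch of the producer side: turn a pinned requirements.txt into minimal
# SPDX-style package entries. Real SBOM generators emit much richer metadata;
# this only shows how manifest data maps onto SBOM components.
import json
from pathlib import Path

def requirements_to_spdx_packages(path: str = "requirements.txt") -> list[dict]:
    packages = []
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue  # skip comments and unpinned entries in this sketch
        name, version = line.split("==", 1)
        packages.append({
            "name": name.strip(),
            "versionInfo": version.strip(),
            "SPDXID": f"SPDXRef-Package-{name.strip()}",
        })
    return packages

if __name__ == "__main__":
    print(json.dumps({"packages": requirements_to_spdx_packages()}, indent=2))
```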
    Finally, Wheeler believes, “Organizations should invest in OpenChain conformance and require their suppliers to implement a process designed to improve trust in a supply chain.” OpenChain is a Linux Foundation project that specifies how organizations manage open-source components and license compliance across their supply chains. It’s also an ISO/IEC standard: 5230:2020.
    With all this in mind, you should put all of the following on your software development and deployment checklist: 

    Harden software build environments

    Move towards verified reproducible builds 

    Change tools & interfaces so unintentional vulnerabilities are less likely

    Educate developers 

    Use vulnerability detection tools when developing software

    Use tools to detect known-vulnerable components when developing software

    Improve widely-used OSS 

    Ask for SBOMs in SPDX format. Many software producers aren’t ready to provide one yet, but creating the demand will speed progress

    Determine if subcomponents used have known vulnerabilities 

    Work towards providing SBOM information if you produce software for others

    Implement OpenChain 

    If you don’t, as Wheeler reminds us, “Those who do not learn from history are often doomed to repeat it.” Do you want your company to be the next SolarWinds? I don’t think so!

  • Cisco says it won't patch 74 security bugs in older RV routers that reached EOL

    Networking equipment vendor Cisco said yesterday it was not going to release firmware updates to fix 74 vulnerabilities that had been reported in its line of RV routers, which had reached end-of-life (EOL).
    Affected devices include Cisco Small Business RV110W, RV130, RV130W, and RV215W systems, which can serve as routers, firewalls, and VPNs.
    All four reached EOL in 2017 and 2018, and their last maintenance window under paid support contracts ended on December 1, 2020.
    In three security advisories posted yesterday [1, 2, 3], Cisco said that since December it had received bug reports for vulnerabilities ranging from simple denial-of-service issues that crashed devices to security flaws that could be used to gain access to root accounts and hijack routers.
    In total, the device maker said it received 74 bug reports but that it wouldn’t be releasing any software patches, mitigations, or workarounds, as the devices had reached EOL years before.
    Instead, the company advised that customers move operations to newer devices, such as the RV132W, RV160, or RV160W models, which provide the same features and which are still being actively supported.
    Some customers might not like the decision, but the good news is that none of the bugs disclosed today can be exploited easily.

    Cisco said that all of the vulnerabilities require an attacker to have credentials for the device, which reduces the risk of networks being attacked in the coming weeks or months. That gives administrators a chance to plan a migration to newer gear, or at least to deploy their own countermeasures in the meantime.
    The CVE identifiers of the bugs Cisco declined to patch in its EOL routers are listed below:
    CVE-2021-1146
    CVE-2021-1147
    CVE-2021-1148
    CVE-2021-1149
    CVE-2021-1150
    CVE-2021-1151
    CVE-2021-1152
    CVE-2021-1153
    CVE-2021-1154
    CVE-2021-1155
    CVE-2021-1156
    CVE-2021-1157
    CVE-2021-1158
    CVE-2021-1159
    CVE-2021-1160
    CVE-2021-1161
    CVE-2021-1162
    CVE-2021-1163
    CVE-2021-1164
    CVE-2021-1165
    CVE-2021-1166
    CVE-2021-1167
    CVE-2021-1168
    CVE-2021-1169
    CVE-2021-1170
    CVE-2021-1171
    CVE-2021-1172
    CVE-2021-1173
    CVE-2021-1174
    CVE-2021-1175
    CVE-2021-1176
    CVE-2021-1177
    CVE-2021-1178
    CVE-2021-1179
    CVE-2021-1180
    CVE-2021-1181
    CVE-2021-1182
    CVE-2021-1183
    CVE-2021-1184
    CVE-2021-1185
    CVE-2021-1186
    CVE-2021-1187
    CVE-2021-1188
    CVE-2021-1189
    CVE-2021-1190
    CVE-2021-1191
    CVE-2021-1192
    CVE-2021-1193
    CVE-2021-1194
    CVE-2021-1195
    CVE-2021-1196
    CVE-2021-1197
    CVE-2021-1198
    CVE-2021-1199
    CVE-2021-1200
    CVE-2021-1201
    CVE-2021-1202
    CVE-2021-1203
    CVE-2021-1204
    CVE-2021-1205
    CVE-2021-1206
    CVE-2021-1207
    CVE-2021-1208
    CVE-2021-1209
    CVE-2021-1210
    CVE-2021-1211
    CVE-2021-1212
    CVE-2021-1213
    CVE-2021-1214
    CVE-2021-1215
    CVE-2021-1216
    CVE-2021-1217
    CVE-2021-1307
    CVE-2021-1360

  • Switching to Signal? Turn on these settings now for greater privacy and security

    Many people are making the switch from WhatsApp to Signal because of the increased privacy and security that Signal offers.
    But did you know that with a few simple tweaks you can make Signal even more secure?
    There are a few settings I suggest you enable. There are some cosmetic differences between the iOS and Android versions of Signal, but these tips apply to both platforms.
    The first place you should head over to is the Settings screen. To get there, tap on your initials in the top-left corner of the screen (on Android you can also tap the three dots on the top-left and then Settings).
    There are three settings on iOS and four on Android I recommend turning on, and a few others worth taking a look at.
    Screen Lock (iOS and Android): Means you have to enter your biometrics (Face ID, Touch ID, fingerprint or passcode) to access the app
    Enable Screen Security (iOS) or Screen Security (Android): On the iPhone this prevents data previews being shown in the app switcher, while on Android it prevents screenshots being taken
    Registration Lock (iOS and Android):  Requires your PIN when registering with Signal (a handy way to prevent a second device being added)
    Incognito Keyboard (Android only): Prevents the keyboard from sending what you type to a third-party, which might allow sensitive data to leak
    While you’re here, Always Relay Calls, a feature that routes all your calls through a Signal server and thus hides your IP address from the recipient, might be worth enabling, but it does degrade call quality.

    On top of this, I suggest that you tame notifications, especially if you are worried about shoulder surfers seeing your messages.
    To do this, head back to the main Settings screen and go to Notifications. For Show, change to No Name or Content for iOS and No name or message for Android.
    The iOS version of Signal also has a feature called Censorship Circumvention under Advanced, which is handy if you live in an area where active internet censorship blocks Signal. If this does not apply to you, leave it off.

  • Apple removes feature that allowed its apps to bypass macOS firewalls and VPNs

    Apple has removed a controversial feature from the macOS operating system that allowed 53 of Apple’s own apps to bypass third-party firewalls, security tools, and VPN apps installed by users for their protection.
    Known as the ContentFilterExclusionList, the list was included in macOS 11, also known as Big Sur.
    The exclusion list included some of Apple’s biggest apps, like the App Store, Maps, and iCloud, and was physically located on disk at: /System/Library/Frameworks/NetworkExtension.framework/Versions/Current/Resources/Info.plist.
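    On an affected Big Sur system, the list could be inspected directly from that property list with nothing more than Python’s standard plistlib; the key name below is the one researchers reported, and on macOS 11.2 and later the lookup simply comes back empty.

```python
# Sketch: read the NetworkExtension framework's Info.plist and print the
# ContentFilterExclusionList entries, if present. On macOS 11.2 and later
# the key has been removed, so an empty result is expected there.
import plistlib

PLIST = ("/System/Library/Frameworks/NetworkExtension.framework/"
         "Versions/Current/Resources/Info.plist")

with open(PLIST, "rb") as f:
    info = plistlib.load(f)

excluded = info.get("ContentFilterExclusionList", [])
print(f"{len(excluded)} excluded entries")
for entry in excluded:
    print(" -", entry)
```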

    Its presence was discovered last October by several security researchers and app makers who realized that their security tools weren’t able to filter or inspect traffic for some of Apple’s applications.
    Security researchers such as Patrick Wardle were quick to point out at the time that this exclusion list was a security nightmare waiting to happen. They argued that malware could latch on to legitimate Apple apps included on the list and then bypass firewalls and security software.
    Besides security pros, the exclusion list was also widely panned by privacy experts, since macOS users risked exposing their real IP address and location when using Apple apps, as VPN products wouldn’t be able to mask users’ location.
    Apple said it was temporary
    Contacted for comment at the time, Apple told ZDNet the list was temporary but did not provide any details. An Apple software engineer later told ZDNet the list was the result of a series of bugs in Apple apps, rather than anything nefarious from the Cupertino-based company.

    The bugs were related to Apple deprecating network kernel extensions (NKEs) in Big Sur and introducing a new system called Network Extension Framework, and Apple engineers not having enough time to iron out all the bugs before the Big Sur launch last fall.
    But some of these bugs have been slowly fixed in the meantime, and yesterday, with the release of macOS Big Sur 11.2 beta 2, Apple felt it was safe to remove the ContentFilterExclusionList from the OS code (as spotted by Wardle earlier today).
    Once Big Sur 11.2 is released, all Apple apps will once again be subject to firewalls and security tools, and they’ll be compatible with VPN apps.

  • Trump ban: No ‘moment for celebration’ in the eyes of Twitter chief

    “I do not celebrate or feel pride in our having to ban @realDonaldTrump from Twitter, or how we got here,” Twitter CEO Jack Dorsey said on Thursday. 

    The ban was the final moment in a long journey for the microblogging platform when it comes to US President Donald Trump. 
    Since Trump took office, if you wanted to know what was on the mind of the US president, you would turn not to official White House channels but instead visit the @realDonaldTrump Twitter feed. 
    While Trump’s off-the-cuff remarks have sometimes been nothing more than a source of amusement — such as the covfefe situation and musings on purchasing Greenland — the results of Trump’s expanded outreach, made possible through social media, took a more sinister turn as the latest US election began, focusing mainly on allegations that the election was subject to fraud. 
    This was, perhaps, the first time in history that a leading political official used an unfiltered channel to speak to supporters and critics alike with such frequent dedication. As ZDNet’s David Gewirtz notes, Trump has tweeted close to 60,000 times since 2009, some 34,000 of them since the day he declared himself a presidential candidate.  
    When a major political figure elects to use a private company as a sounding board for their thoughts, broadcasting them to roughly 88 million people without any form of official review or censoring, the world takes note. 
    Words matter, as we saw in the attack on the US Capitol building, and this has become a hard lesson for Twitter to digest.

    As rioters took selfies, rifled through offices, caused substantial damage, stole items, and caused injury, it was not just law enforcement that stood to attention — it was private technology companies, too. 
    Suddenly finding themselves at the heart of insurrection, after years of being used as communication channels by the US president, now impeached for the second time, Twitter and Facebook — alongside other companies — were also forced to act.   
    On the day of the attack on the Capitol, Trump attended a “Save America” rally, claiming once again that the election had been stolen, adding that “If you don’t fight like hell you’re not going to have a country anymore.” 
    While the president also said, “I know that everyone here will soon be marching over to the Capitol building to peacefully and patriotically make your voices heard,” a comment arguably showing that Trump did not support the destructive actions of those who participated in the Capitol attack, it was the content later posted to Twitter that finally forced the platform to make its own sentiments known. 
    Hours after the riot began, the world was waiting for the US president to break his silence. In typical fashion, Twitter was chosen as the platform, and in a video posted to his feed, Trump said:

    “Go home. We love you, you’re very special. You’ve seen what happens, you’ve seen the way others are treated that are so bad and so evil. I know how you feel.”

    There was, perhaps, no other moment that reflected so strongly how a technology company had become the gatekeeper and mouthpiece to a political behemoth and a factor in potential threats to public safety. 
    Trump has been accused of having “blood on his hands” by “inciting the insurrection.” If Twitter did not act, and more content was posted that encouraged the actions of his supporters further, the company may have been labeled in a similar way as the conduit to further unrest.   
    Twitter cut Trump off, suspending his account pending review. The company has now permanently banned him from the network and also appears to be monitoring the official @POTUS handle for any signs that Trump is attempting to post from it. 
    Facebook and Instagram have suspended his accounts until at least Inauguration Day when President-elect Joe Biden is expected to take office. Snapchat, YouTube, and Twitch have followed suit. 
    Two tweets posted by the president were considered “likely to inspire others to replicate the violent acts that took place on January 6, 2021, and that there are multiple indicators that they are being received and understood as encouragement to do so,” Twitter said, leading to the ban. 
    Now, Twitter’s chief has gone into further detail as to why the suspension of an account belonging to a president had to take place, saying that the ban was “the right decision for Twitter.”
    It was only a tweet, but you could almost feel the resignation in the tone of Dorsey’s explanation. 
    “I do not celebrate or feel pride in our having to ban [Trump],” the CEO said. “We made a decision with the best information we had based on threats to physical safety both on and off Twitter.”
    “We faced an extraordinary and untenable circumstance, forcing us to focus all of our actions on public safety,” Dorsey added. “Offline harm as a result of online speech is demonstrably real, and what drives our policy and enforcement above all.”
    However, the executive also said that the need to remove the US president’s channel highlighted “a failure of ours ultimately to promote healthy conversation.”
    This, perhaps, is when a company that began its journey as a provider of a platform for open and free discourse makes the transition into a hub for how politics, beliefs, and actions are influenced — and is forced to face what the ramifications on a nationwide — or global — scale could be. 

    Influencers can use their channels to tout products and make a quick buck; conspiracy theories can run rampant, anti-vaxxers can swap stories and claims, and now political leaders can use their social media outreach to spur followers into action, potentially with fatal consequences, as the deaths linked to the Capitol attack show. 
    Opinions are divided. Twitter has been accused of double standards and targeted censorship by removing Trump but allowing other malicious content to spread, whereas others have applauded the decision as overdue.
    Some organizations, including the civil rights outfit the Electronic Frontier Foundation, say that the company was simply exercising its rights as a private company with user terms of service — but add that more needs to be done to maintain a balanced and transparent approach. 
    “A platform should not apply one set of rules to most of its users, and then apply a more permissive set of rules to politicians and world leaders who are already immensely powerful,” the EFF said in a statement. “Instead, they should be precisely as judicious about removing the content of ordinary users as they have been to date regarding heads of state.” 
    When a corporate entity has the power to silence voices that can turn the tide of public discourse, this is also a heavy responsibility — and one that, perhaps, private companies should not have in the first place. 
    Private companies have intervened in politics and law for decades through lobbying. However, it may be the sudden and deeply impactful example of corporate power on the political scene, by silencing Trump in such an immediate and public fashion, which has changed the discussion concerning free speech, censorship, and where lines should be drawn.
    “Having to take these actions fragment the public conversation,” Dorsey said. “They divide us. They limit the potential for clarification, redemption, and learning. And sets a precedent I feel is dangerous: the power an individual or corporation has over a part of the global public conversation.”

  • Phishing warning: These are the brands most likely to be impersonated by crooks, so stay alert

    Almost half of all phishing attacks that try to steal login credentials such as email addresses and passwords by imitating well-known brands impersonate Microsoft.
    Cybersecurity researchers at Check Point analysed phishing emails sent over the last three months and found that 43% of all phishing attempts mimicking brands were attempting to pass themselves off as messages from Microsoft.


    Microsoft is a popular lure because of Office 365’s wide distribution among enterprises. By stealing these credentials, criminals hope to gain access to corporate networks.
    And with many organisations shifting towards remote working to ensure social distancing over the course of the last year, email and online messaging have become even more important to businesses – and that’s something cyber attackers are actively looking to exploit.
    Not only are employees relying on email for everyday communication with their teammates and bosses, but they also don’t always have the same security awareness and protection while working from home.
    With these attacks, the messages aren’t always designed to look like they come from Microsoft itself; they could claim to come from a colleague, HR, a supplier, or anyone else the person might come into contact with. Either way, the phishing link or attachment will ask the user to enter their login details to ‘verify’ their identity.

    If the email address and password are entered into these pages designed to look like a Microsoft login site, the attackers are able to steal them. Stolen credentials can be used to gain further access to the compromised network, or they can be sold on to other cyber criminals on dark web marketplaces.
    The second most commonly imitated brand during the period of analysis was DHL, with attacks mimicking the logistics provider accounting for 18% of all brand-phishing attempts. DHL has become a popular phishing lure for criminals because many people are now stuck at home due to COVID-19 restrictions and receiving more deliveries – so people are more likely to let their guard down when they see messages claiming to be from a delivery firm.
    Other brands commonly impersonated in phishing emails include LinkedIn, Amazon, Google, PayPal and Yahoo. Compromising any of these accounts could provide cyber criminals with access to sensitive personal information that they could exploit.
    “Criminals increased their attempts in Q4 2020 to steal peoples’ personal data by impersonating leading brands, and our data clearly shows how they change their phishing tactics to increase their chances of success,” said Maya Horowitz, director of threat intelligence and research at Check Point.
    “As always, we encourage users to be cautious when divulging personal data and credentials to business applications, and to think twice before opening email attachments or links, especially emails that claim to be from companies, such as Microsoft or Google, that are most likely to be impersonated,” she added.
    It’s also possible to add an extra layer of protection to Microsoft Office 365 and other corporate accounts by applying two-factor authentication: even if cyber criminals manage to steal the username and password, the extra verification step will help keep the account safe.
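    The second factor is typically a time-based one-time password (TOTP, RFC 6238) generated by an authenticator app. As a rough illustration of why a stolen password alone isn’t enough, the sketch below derives such a code from a shared secret; the Base32 secret is a made-up example.

```python
# Minimal RFC 6238 TOTP sketch: derive a 6-digit code from a shared secret.
# This is what an authenticator app computes as the "second factor";
# the Base32 secret below is a made-up example.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # 30-second time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

if __name__ == "__main__":
    print("current code:", totp("JBSWY3DPEHPK3PXP"))
```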

  • Scam-as-a-Service operation made more than $6.5 million in 2020

    A newly uncovered Russia-based cybercrime operation has helped classified-ads scammers steal more than $6.5 million from buyers across the US, Europe, and former Soviet states.

    In a report published today, cyber-security firm Group-IB has delved into this operation, which the company has described as a Scam-as-a-Service and codenamed Classiscam.
    According to the report, the Classiscam scheme began in early 2019 and initially only targeted buyers active on Russian online marketplaces and classified ads portals.
    The group expanded to other countries only last year, after it began recruiting scammers who could target and hold conversations with foreign-language customers. Currently, Classiscam is active in more than a dozen countries and on foreign marketplaces and courier services such as Leboncoin, Allegro, OLX, FAN Courier, Sbazar, DHL, and others.
    How Classiscam operates
    But despite the wide targeting, Classiscam’s modus operandi follows a similar pattern — adapted for each site — and revolves around publishing ads for nonexistent products on online marketplaces.
    “The ads usually offer cameras, game consoles, laptops, smartphones, and similar items for sale at deliberately low prices,” Group-IB said today.
    Once a user showed interest and contacted the vendor (the scammer), the Classiscam operator would ask the buyer to provide details to arrange the product’s delivery.

    The scammer would then use a Telegram bot to generate a phishing page that mimicked the original marketplace but was hosted on a look-alike domain. The scammer would send the link to the buyer, who would fill in their payment details.

    Once the victim provided the payment details, the scammers would take the data and attempt to use it elsewhere to purchase other products.
    More than 40 Classiscam groups active today
    Group-IB said that the entire operation was very well organized, with “admins” at the top, followed by “workers,” and “callers.”
    Admins had the easiest job in the scheme, managing the Telegram bots, creating the fake ads, and recruiting “workers,” both inside Russia and abroad.
    Workers were the people who interacted with victims directly, doing most of the work, generating the individual phishing links, and making sure payments were made.
    Callers had the smallest part in the scheme, acting as support specialists and speaking with victims over the phone in case any of them suspected anything or had technical problems.

    Based on the number of Telegram bots it discovered, Group-IB believes there are more than 40 different groups currently using Classiscam’s services.
    Half of the groups run scams on Russian sites, while the other half target users in Bulgaria, the Czech Republic, France, Poland, Romania, the US, and post-Soviet countries.
    Group-IB said that more than 5,000 users (working as scammers) were registered in these 40+ Telegram chats at the end of 2020.
    The security firm estimates that on average, each of these groups makes around $61,000/month, while the entire Classiscam operation makes around $522,000/month in total.
    “So far, the scam’s expansion in Europe is hindered by language barriers and difficulties with cashing out stolen money abroad,” said Dmitriy Tiunkin, Head of Group-IB Digital Risk Protection Department, Europe. “Once the scammers overcome these barriers, Classiscam will spread in the West.”

  • Ring trials customer video end-to-end encryption for smart doorbells

    Ring has launched a technical preview of video end-to-end encryption to bolster the security of home video feeds.

    This week, the Amazon-owned smart doorbell maker said the feature is currently being rolled out to customers in order to elicit feedback. If it proves successful, end-to-end video encryption could eventually be offered as an opt-in feature to users who want to add an “additional layer of security to their videos.” 
    “We will continue to innovate and invest in features that empower our neighbors with the ability to easily view, understand, and manage how their videos and information stay secure with Ring,” the company says. 
    End-to-end encryption aims to protect data from being hijacked, read, or otherwise compromised by preventing anyone other than an intended recipient from being able to unlock and decrypt information — whether this is messages, video feeds, or other content.
    Ring says that videos are already encrypted in transit — when footage is uploaded to the cloud — and at rest, when footage is stored on Ring servers. However, the new feature will implement encryption at the home level, with footage that can only be decrypted using a key stored locally on the user’s mobile device. 
    The company says the feature has been “designed so that only the customer can decrypt and view recordings on their enrolled device.”
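    Ring hasn’t published its implementation in detail, so the following is only a toy illustration of the general end-to-end principle, using the third-party Python cryptography package: the key is generated and kept on the device, so whatever the cloud stores is ciphertext it cannot read.

```python
# Toy illustration of the end-to-end idea (not Ring's actual design):
# the key is generated and kept on the device, so the server only ever
# stores ciphertext it cannot read. Requires the third-party
# "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

# 1. On the device: generate a key that never leaves local storage.
device_key = Fernet.generate_key()
device_cipher = Fernet(device_key)

# 2. "Upload": only the encrypted recording is sent to the cloud.
recording = b"doorbell video frame bytes..."
stored_in_cloud = device_cipher.encrypt(recording)

# 3. The cloud (or anyone without device_key) sees only opaque ciphertext.
print("cloud stores only ciphertext:", stored_in_cloud[:40].decode() + "...")

# 4. Back on an enrolled device holding the key, decryption works.
assert Fernet(device_key).decrypt(stored_in_cloud) == recording
print("decrypted on device: OK")
```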
    In order to enable the feature for Ring devices, users involved in the rollout can select this option from the Video Encryption page in the Ring app’s control center. 

    Ring has come under fire in recent months due to security concerns. In December 2020, a class-action lawsuit was filed against Ring following “dozens” of customers experiencing death threats, blackmail attempts, and verbal attacks. The lawsuit claims that shoddy security opened the door for their devices to be hijacked by harassers, leading to distress and invasions of privacy.
    As noted by sister site CNET, Ring confirmed that any end-to-end encrypted videos cannot be viewed by Ring, Amazon, or any law enforcement official. If the feature is enabled, this also impacts the Ring Neighbor program, in which customers can voluntarily share video feeds with law enforcement — as end-to-end encrypted footage will not be viewable. 