More stories

    Switching to Signal? Turn on these settings now for greater privacy and security

Many people are switching from WhatsApp to Signal, drawn by the increased privacy and security Signal offers.
But did you know that with a few simple tweaks you can make Signal even more secure?
    Must read: WhatsApp vs. Signal vs. Telegram vs. Facebook: What data do they have about you?
There are a few settings I suggest you enable. While there are some cosmetic differences between the iOS and Android versions of Signal, these tips apply to both platforms.
The first place you should head is the Settings screen. To get there, tap your initials in the top-left corner of the screen (on Android you can also tap the three dots in the top-right and then Settings).
    There are three settings on iOS and four on Android I recommend turning on, and a few others worth taking a look at.
Screen Lock (iOS and Android): Requires Face ID, Touch ID, a fingerprint, or your passcode to access the app
    Enable Screen Security (iOS) or Screen Security (Android): On the iPhone this prevents data previews being shown in the app switcher, while on Android it prevents screenshots being taken
Registration Lock (iOS and Android): Requires your PIN when registering with Signal (a handy way to stop someone else registering your number on another device)
Incognito Keyboard (Android only): Prevents the keyboard from sending what you type to a third party, which could leak sensitive data
While you’re here, Always Relay Calls might be worth enabling: it routes all your calls through a Signal server, hiding your IP address from the recipient, though it does degrade call quality.

    On top of this, I suggest that you tame notifications, especially if you are worried about shoulder surfers seeing your messages.
    To do this, head back to the main Settings screen and go to Notifications. For Show, change to No Name or Content for iOS and No name or message for Android.
The iOS version of Signal also has a feature called Censorship Circumvention under Advanced, which is handy if you live in an area where Signal is blocked by active internet censorship. If this does not apply to you, leave it off.

    Apple removes feature that allowed its apps to bypass macOS firewalls and VPNs

    Image: Markus Spiske
    Apple has removed a controversial feature from the macOS operating system that allowed 53 of Apple’s own apps to bypass third-party firewalls, security tools, and VPN apps installed by users for their protection.
    Known as the ContentFilterExclusionList, the list was included in macOS 11, also known as Big Sur.
    The exclusion list included some of Apple’s biggest apps, like the App Store, Maps, and iCloud, and was physically located on disk at: /System/Library/Frameworks/NetworkExtension.framework/Versions/Current/Resources/Info.plist.
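For readers curious what such a list looks like, Apple property lists are plain XML and can be inspected with Python's standard-library `plistlib`. The snippet below parses a hypothetical excerpt modelled on the structure researchers described; the sample plist and bundle IDs are illustrative, not the actual file contents:

```python
import plistlib

# Hypothetical excerpt modelled on the structure researchers described;
# the real list lived inside NetworkExtension.framework's Info.plist on macOS.
SAMPLE_PLIST = b"""<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
 "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>ContentFilterExclusionList</key>
    <array>
        <string>com.apple.AppStore</string>
        <string>com.apple.Maps</string>
    </array>
</dict>
</plist>
"""

def excluded_bundles(plist_bytes: bytes) -> list:
    """Return the bundle IDs listed under ContentFilterExclusionList, if any."""
    data = plistlib.loads(plist_bytes)
    return data.get("ContentFilterExclusionList", [])

print(excluded_bundles(SAMPLE_PLIST))
```

Any app whose bundle ID appeared in that array was simply skipped by content-filter extensions, which is why firewall vendors could not see its traffic.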

    Image: Simone Margaritelli
    Its presence was discovered last October by several security researchers and app makers who realized that their security tools weren’t able to filter or inspect traffic for some of Apple’s applications.
    Security researchers such as Patrick Wardle, and others, were quick to point out at the time that this exclusion risk was a security nightmare waiting to happen. They argued that malware could latch on to legitimate Apple apps included on the list and then bypass firewalls and security software.
Besides security pros, privacy experts also widely panned the exclusion list, since macOS users risked exposing their real IP address and location when using Apple apps, as VPN products wouldn’t be able to mask them.
    Apple said it was temporary
    Contacted for comment at the time, Apple told ZDNet the list was temporary but did not provide any details. An Apple software engineer later told ZDNet the list was the result of a series of bugs in Apple apps, rather than anything nefarious from the Cupertino-based company.

    The bugs were related to Apple deprecating network kernel extensions (NKEs) in Big Sur and introducing a new system called Network Extension Framework, and Apple engineers not having enough time to iron out all the bugs before the Big Sur launch last fall.
But some of these bugs have been slowly fixed in the meantime, and, yesterday, with the release of macOS Big Sur 11.2 beta 2, Apple felt it was safe to remove the ContentFilterExclusionList from the OS code (as spotted by Wardle earlier today).
Once Big Sur 11.2 is released, all Apple apps will once again be subject to firewalls and security tools, and they’ll be compatible with VPN apps.

    Trump ban: No ‘moment for celebration’ in the eyes of Twitter chief

    “I do not celebrate or feel pride in our having to ban @realDonaldTrump from Twitter, or how we got here,” Twitter CEO Jack Dorsey said on Thursday. 

    The ban was the final moment in a long journey for the microblogging platform when it comes to US President Donald Trump. 
    Since Trump took office, if you wanted to know what was on the mind of the US president, you would turn not to official White House channels but instead visit the @realDonaldTrump Twitter feed. 
    While Trump’s off-the-cuff remarks have sometimes been nothing more than a source of amusement — such as the covfefe situation and musings on purchasing Greenland — the results of Trump’s expanded outreach, made possible through social media, took a more sinister turn as the latest US election began, mainly focused on allegations that the election was subject to fraud. 
This was, perhaps, the first time in history that a leading political official used an unfiltered channel to speak to supporters and critics alike with such frequent dedication. As ZDNet’s David Gewirtz notes, Trump has tweeted close to 60,000 times since 2009, 34,000 of which came after the day he declared himself a presidential candidate.
    When a major political figure elects to use a private company as a sounding board for their thoughts, broadcasting them to roughly 88 million people without any form of official review or censoring, the world takes note. 
    Words matter, as we saw in the attack on the US Capitol building, and this has become a hard lesson for Twitter to digest.

    As rioters took selfies, rifled through offices, caused substantial damage, stole items, and caused injury, it was not just law enforcement that stood to attention — it was private technology companies, too. 
Suddenly finding themselves at the heart of an insurrection, after years of serving as communication channels for a US president (now impeached for a second time), Twitter and Facebook, alongside other companies, were also forced to act.
    On the day of the attack on the Capitol, Trump attended a “Save America” rally, claiming once again that the election had been stolen, adding that “If you don’t fight like hell you’re not going to have a country anymore.” 
    While the president also said, “I know that everyone here will soon be marching over to the Capitol building to peacefully and patriotically make your voices heard,” a comment arguably showing that Trump did not support the destructive actions of those who participated in the Capitol attack, it was the content later posted to Twitter that finally forced the platform to make its own sentiments known. 
    Hours after the riot began, the world was waiting for the US president to break his silence. In a typical fashion, Twitter was chosen as the platform, and in a video posted to his feed, Trump said:

    “Go home. We love you, you’re very special. You’ve seen what happens, you’ve seen the way others are treated that are so bad and so evil. I know how you feel.”

    There was, perhaps, no other moment that reflected so strongly how a technology company had become the gatekeeper and mouthpiece to a political behemoth and a factor in potential threats to public safety. 
    Trump has been accused of having “blood on his hands” by “inciting the insurrection.” If Twitter did not act, and more content was posted that encouraged the actions of his supporters further, the company may have been labeled in a similar way as the conduit to further unrest.   
    Twitter cut Trump off, suspending his account pending review. The company has now permanently banned him from the network and also appears to be monitoring the official @POTUS handle for any signs that Trump is attempting to post from it. 
    Facebook and Instagram have suspended his accounts until at least Inauguration Day when President-elect Joe Biden is expected to take office. Snapchat, YouTube, and Twitch have followed suit. 
    Two tweets posted by the president were considered “likely to inspire others to replicate the violent acts that took place on January 6, 2021, and that there are multiple indicators that they are being received and understood as encouragement to do so,” Twitter said, leading to the ban. 
    Now, Twitter’s chief has gone into further detail as to why the suspension of an account belonging to a president had to take place, saying that the ban was “the right decision for Twitter.”
    It was only a tweet, but you could almost feel the resignation in the tone of Dorsey’s explanation. 
    “I do not celebrate or feel pride in our having to ban [Trump],” the CEO said. “We made a decision with the best information we had based on threats to physical safety both on and off Twitter.”
    “We faced an extraordinary and untenable circumstance, forcing us to focus all of our actions on public safety,” Dorsey added. “Offline harm as a result of online speech is demonstrably real, and what drives our policy and enforcement above all.”
    However, the executive also said that the need to remove the US president’s channel highlighted “a failure of ours ultimately to promote healthy conversation.”
    This, perhaps, is when a company that began its journey as a provider of a platform for open and free discourse makes the transition into a hub for how politics, beliefs, and actions are influenced — and is forced to face what the ramifications on a nationwide — or global — scale could be. 

    Influencers can use their channels to tout products and make a quick buck; conspiracy theories can run rampant, anti-vaxxers can swap stories and claims, and now political leaders can use their social media outreach to spur followers into action, potentially with fatal consequences, as the deaths linked to the Capitol attack show. 
    Opinions are divided. Twitter has been accused of double standards and targeted censorship by removing Trump but allowing other malicious content to spread, whereas others have applauded the decision as overdue.
Some organizations, including the digital rights group the Electronic Frontier Foundation, say that the company was simply exercising its rights as a private company with user terms of service, but add that more needs to be done to maintain a balanced and transparent approach.
    “A platform should not apply one set of rules to most of its users, and then apply a more permissive set of rules to politicians and world leaders who are already immensely powerful,” the EFF said in a statement. “Instead, they should be precisely as judicious about removing the content of ordinary users as they have been to date regarding heads of state.” 
    When a corporate entity has the power to silence voices that can turn the tide of public discourse, this is also a heavy responsibility — and one that, perhaps, private companies should not have in the first place. 
    Private companies have intervened in politics and law for decades through lobbying. However, it may be the sudden and deeply impactful example of corporate power on the political scene, by silencing Trump in such an immediate and public fashion, which has changed the discussion concerning free speech, censorship, and where lines should be drawn.
    “Having to take these actions fragment the public conversation,” Dorsey said. “They divide us. They limit the potential for clarification, redemption, and learning. And sets a precedent I feel is dangerous: the power an individual or corporation has over a part of the global public conversation.”
Have a tip? Get in touch securely via WhatsApp | Signal at +447713 025 499, or over at Keybase: charlie0

    Phishing warning: These are the brands most likely to be impersonated by crooks, so stay alert

    Almost half of all phishing attacks designed to steal login credentials like email addresses and passwords by imitating well-known brands are impersonating Microsoft.
    Cybersecurity researchers at Check Point analysed phishing emails sent over the last three months and found that 43% of all phishing attempts mimicking brands were attempting to pass themselves off as messages from Microsoft.

    Microsoft is a popular lure because of Office 365’s wide distribution among enterprises. By stealing these credentials, criminals hope to gain access to corporate networks.
    SEE: Security Awareness and Training policy (TechRepublic Premium)
    And with many organisations shifting towards remote working to ensure social distancing over the course of the last year, email and online messaging have become even more important to businesses – and that’s something cyber attackers are actively looking to exploit.
    Not only are employees relying on emails for everyday communication with their team mates and bosses, they also don’t always have the same security awareness and protection while working from home.
With these attacks, even if the messages aren’t designed to look like they come from Microsoft itself (they could claim to come from a colleague, HR, a supplier, or anyone else the person might come into contact with), the phishing link or attachment will ask the user to enter their login details to ‘verify’ their identity.

    If the email address and password are entered into these pages designed to look like a Microsoft login site, the attackers are able to steal them. Stolen credentials can be used to gain further access to the compromised network, or they can be sold on to other cyber criminals on dark web marketplaces.
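One coarse defence against this class of attack is to check the hostname of any login link against an allow-list of known-good domains before entering credentials. The sketch below is a minimal illustration, assuming a hypothetical allow-list of Microsoft sign-in hosts; real phishing detection is considerably more involved:

```python
from urllib.parse import urlsplit

# Illustrative allow-list; a real deployment would maintain its own.
LEGIT_LOGIN_HOSTS = {"login.microsoftonline.com", "login.live.com"}

def looks_like_spoof(url: str) -> bool:
    """Flag login URLs whose host is not an exact match for a known-good host."""
    host = urlsplit(url).hostname or ""
    return host.lower() not in LEGIT_LOGIN_HOSTS

print(looks_like_spoof("https://login.microsoftonline.com/common/oauth2"))   # False
print(looks_like_spoof("https://login.micros0ftonline.com.evil.example/x"))  # True
```

Note the second URL: attackers often embed the real brand's name as a subdomain of a domain they control, which is why only an exact hostname match should be trusted.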
    The second most commonly imitated brand during the period of analysis was DHL, with attacks mimicking the logistics provider accounting for 18% of all brand-phishing attempts. DHL has become a popular phishing lure for criminals because many people are now stuck at home due to COVID-19 restrictions and receiving more deliveries – so people are more likely to let their guard down when they see messages claiming to be from a delivery firm.
    SEE: Ransomware victims aren’t reporting attacks to police. That’s causing a big problem
    Other brands commonly impersonated in phishing emails include LinkedIn, Amazon, Google, PayPal and Yahoo. Compromising any of these accounts could provide cyber criminals with access to sensitive personal information that they could exploit.
    “Criminals increased their attempts in Q4 2020 to steal peoples’ personal data by impersonating leading brands, and our data clearly shows how they change their phishing tactics to increase their chances of success,” said Maya Horowitz, director of threat intelligence and research at Check Point.
“As always, we encourage users to be cautious when divulging personal data and credentials to business applications, and to think twice before opening email attachments or links, especially emails that claim to be from companies, such as Microsoft or Google, that are most likely to be impersonated,” she added.
    It’s also possible to provide an extra layer of protection to Microsoft Office 365 and other corporate accounts by applying two-factor authentication, so that even if cyber criminals manage to steal the username and password, the extra layer of verification required by two-factor authentication will help to keep the account safe.
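For context on how that extra verification layer works: most authenticator apps implement TOTP (RFC 6238), which derives a short-lived code from a shared secret and the current time, so a stolen password alone is not enough. Below is a minimal standard-library sketch using the RFC's reference setup (SHA-1, 6 digits, 30-second steps); production systems should use a vetted library rather than hand-rolled code:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time: int = None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, then dynamic truncation."""
    if for_time is None:
        for_time = int(time.time())
    counter = struct.pack(">Q", for_time // step)          # 8-byte big-endian counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 reference secret; at Unix time 59 the 6-digit code is 287082.
print(totp(b"12345678901234567890", for_time=59))
```

Because the code changes every 30 seconds and never travels with the password, credentials phished today are useless to an attacker tomorrow (unless the attacker relays the code in real time).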

    Scam-as-a-Service operation made more than $6.5 million in 2020

    Image: Group-IB
A newly uncovered Russia-based cybercrime operation has helped classified-ads scammers steal more than $6.5 million from buyers across the US, Europe, and former Soviet states.

    In a report published today, cyber-security firm Group-IB has delved into this operation, which the company has described as a Scam-as-a-Service and codenamed Classiscam.
    According to the report, the Classiscam scheme began in early 2019 and initially only targeted buyers active on Russian online marketplaces and classified ads portals.
The group expanded to other countries only last year, after it began recruiting scammers who could target and hold conversations with foreign-language customers. Currently, Classiscam is active in more than a dozen countries and targets foreign marketplaces and courier services such as Leboncoin, Allegro, OLX, FAN Courier, Sbazar, DHL, and others.
    How Classiscam operates
But despite the wide targeting, Classiscam’s modus operandi follows a similar pattern, adapted for each site, and revolves around publishing ads for non-existent products on online marketplaces.
    “The ads usually offer cameras, game consoles, laptops, smartphones, and similar items for sale at deliberately low prices,” Group-IB said today.
Once users are interested and contact the vendor (the scammer), the Classiscam operator would ask the buyer for details to arrange the product’s delivery.

The scammer would then use a Telegram bot to generate a phishing page that mimicked the original marketplace but was hosted on a look-alike domain. The scammer would send the link to the buyer, who would enter their payment details.
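Look-alike domains of this kind can often be caught mechanically by fuzzy-matching a hostname against a list of brand domains. The sketch below is a simplified illustration using Python's `difflib` with a hypothetical brand list; production systems use more robust techniques such as confusable-character normalisation:

```python
from difflib import SequenceMatcher

# Hypothetical brand list; Classiscam pages imitated marketplaces like these.
KNOWN_BRANDS = ["leboncoin.fr", "allegro.pl", "olx.pl", "dhl.com"]

def lookalike_of(domain: str, threshold: float = 0.8):
    """Return the brand a domain closely resembles without matching exactly, if any."""
    domain = domain.lower()
    for brand in KNOWN_BRANDS:
        # ratio() is 1.0 for identical strings; near-misses score just below it.
        if domain != brand and SequenceMatcher(None, domain, brand).ratio() >= threshold:
            return brand
    return None

print(lookalike_of("leb0ncoin.fr"))  # flags it as resembling leboncoin.fr
```

A single substituted character ("0" for "o") still scores well above the threshold, which is exactly the trick such phishing kits rely on victims not noticing.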

    Image: Group-IB
    Once the victim provided the payment details, the scammers would take the data and attempt to use it elsewhere to purchase other products.
    More than 40 Classiscam groups active today
    Group-IB said that the entire operation was very well organized, with “admins” at the top, followed by “workers,” and “callers.”
    Admins had the easiest job in the scheme, managing the Telegram bots, creating the fake ads, and recruiting “workers,” both inside Russia and abroad.
    Workers were the people who interacted with victims directly, doing most of the work, generating the individual phishing links, and making sure payments were made.
Callers had the smallest part in the scheme, acting as support specialists and speaking with victims over the phone in case any of them suspected anything or ran into technical problems.

    Image: Group-IB
    Based on the number of Telegram bots it discovered, Group-IB believes there are more than 40 different groups currently using Classiscam’s services.
    Half of the groups run scams on Russian sites, while the other half target users in Bulgaria, the Czech Republic, France, Poland, Romania, the US, and post-Soviet countries.
    Group-IB said that more than 5,000 users (working as scammers) were registered in these 40+ Telegram chats at the end of 2020.
    The security firm estimates that on average, each of these groups makes around $61,000/month, while the entire Classiscam operation makes around $522,000/month in total.
“So far, the scam’s expansion in Europe is hindered by language barriers and difficulties with cashing out stolen money abroad,” said Dmitriy Tiunkin, Head of Group-IB Digital Risk Protection Department, Europe. “Once the scammers overcome these barriers, Classiscam will spread in the West.”

    Ring trials customer video end-to-end encryption for smart doorbells

    Ring has launched a technical preview of video end-to-end encryption to bolster the security of home video feeds.

    This week, the Amazon-owned smart doorbell maker said the feature is currently being rolled out to customers in order to elicit feedback, and if it proves to be successful, end-to-end video encryption could eventually be offered to users that want to add an “additional layer of security to their videos” as an opt-in feature. 
    “We will continue to innovate and invest in features that empower our neighbors with the ability to easily view, understand, and manage how their videos and information stay secure with Ring,” the company says. 
    End-to-end encryption aims to protect data from being hijacked, read, or otherwise compromised by preventing anyone other than an intended recipient from being able to unlock and decrypt information — whether this is messages, video feeds, or other content.
Ring says that videos are already encrypted in transit (when footage is uploaded to the cloud) and at rest (when footage is stored on Ring servers). The new feature, however, encrypts footage at the home level, so that it can only be decrypted with a key stored locally on the user’s mobile device.
    The company says the feature has been “designed so that only the customer can decrypt and view recordings on their enrolled device.”
    In order to enable the feature for Ring devices, users involved in the rollout can select this option from the Video Encryption page in the Ring app’s control center. 

    Ring has come under fire in recent months due to security concerns. In December 2020, a class-action lawsuit was filed against Ring following “dozens” of customers experiencing death threats, blackmail attempts, and verbal attacks. The lawsuit claims that shoddy security opened the door for their devices to be hijacked by harassers, leading to distress and invasions of privacy.
    As noted by sister site CNET, Ring confirmed that any end-to-end encrypted videos cannot be viewed by Ring, Amazon, or any law enforcement official. If the feature is enabled, this also impacts the Ring Neighbor program, in which customers can voluntarily share video feeds with law enforcement — as end-to-end encrypted footage will not be viewable. 

    Australian Home Affairs Minister takes issue with EU Electronic Communications Code

The Australian government, alongside counterparts from Canada, New Zealand, the United Kingdom, and the United States, has rallied to declare that the unintended consequences of the new European Electronic Communications Code are putting children at risk.
    The new code came into effect in the European Union on 21 December 2020 and is aimed at harmonising the existing legal framework for electronic communications across the EU.
    It introduced a new broader definition of “electronic communications services”, which compels service providers operating in the EU to comply with the rules of the ePrivacy Directive.
    As a result, many over-the-top (OTT) providers and various other telecommunications services that did not previously fall within the definition of the code now do. Australian Minister for Home Affairs Peter Dutton said this would inadvertently make it easier for criminals to abuse children online.
    “Under the new Code, it is now illegal for electronic service providers, including social media companies, operating in the EU to continue to use the necessary tools to detect child sexual abuse material on online platforms and services,” a statement from Dutton said.
    The minister responsible for children in Australian immigration detention facilities said protecting children is the “most important thing we can do as a global community”. He said user privacy should not come at the expense of children’s safety.
    “It is essential that the European Parliament acts urgently and agrees to exempt certain technologies from the ‘ePrivacy Directive’ and preserve companies’ ability to detect and prevent child sexual abuse. This cannot wait,” the statement continues. “We support European Union measures that will allow for the continuation and expansion of the current efforts to keep children safe online.”

    The statement from the five countries said it is essential that the European Union urgently adopt the derogation to the ePrivacy Directive as proposed by the European Commission in order for the essential work carried out by service providers to shield endangered children in Europe and around the world to continue.  
    “The European Union has a unique role to play in the global fight against online child sexual exploitation. It is essential that the European Union adopt measures that ensure not only the legal authority, but also the practical ability, for providers to use tools to detect online child sexual exploitation,” it reads.
    Dutton pointed to the Voluntary Principles to Counter Online Child Sexual Exploitation and Abuse, which, launched in March 2020, provides a set of 11 actions that tech firms have voluntarily agreed to follow in order to prevent child predators from targeting kids on their platforms.
    “The Voluntary Principles rely on the continuation of companies’ legal and technical ability to identify and take action against child sexual abuse on their platforms,” Dutton said.
    He also pointed to the signing of the International Statement: End-to-End Encryption and Public Safety in October 2020 by the Australian, Canadian, New Zealand, UK, US, Indian, and Japanese governments, saying the countries have been working closely with the world’s largest tech companies to implore companies to better protect children online.
    “The introduction of the Code could undermine this progress and prevent tech companies from using some of the most powerful tools available to combat child abuse on their platforms,” Dutton said.

    Guest Mode now available on Google Assistant

Google has introduced Guest Mode to Google Assistant, giving users a way to ensure that their interactions with Google smart speakers or displays, including Nest Audio and Nest Hub Max, are not saved to their account while the mode is switched on.
    When Guest Mode is switched on, users will be able to continue to ask questions, control smart home devices, set timers, and play music, but will not be able to access personal results, such as calendar entries or contacts, until Guest Mode is switched off.
    Google added the device will also automatically delete audio recordings and Google Assistant activity from the device owner’s account when in Guest Mode.
    However, if users are interacting with other apps and services, such as Google Maps, YouTube, or other media and smart home services while in Guest Mode, those apps may still save that activity, Google said.
Switching on Guest Mode is simply a matter of saying, “Hey Google, turn on Guest Mode”, after which the device plays a special chime and displays a guest icon. To switch it off, users just ask Google to turn off the feature.
    Users can also check if their device is still on Guest Mode by asking Google, “Is Guest Mode on?”
    Google product manager Philippe de Lurand Pierre-Paul said Guest Mode was designed to give users more privacy controls, suggesting it could come in handy when guests are over and don’t want their interactions saved to a user’s existing account.

    “Google Assistant is designed to automatically safeguard your privacy and offer simple ways for you to control how it works with your data,” he wrote in a blog post.
Guest Mode is now available on Google Nest speakers and displays in English. Google said it plans to bring the feature to more languages and devices in the next few months.
    This latest feature builds on other Assistant privacy features Google introduced just last week at CES 2021, including allowing users to delete a record of the most recent command by saying, “Hey Google, that wasn’t for you”, or asking “Hey Google, are you saving my audio data?” to learn about their privacy controls and go directly into the settings screen to change their preferences.
    Google confirmed in August that third-party workers were “systematically listening” and leaking private Dutch conversations collected by the assistant. 
    Belgian public broadcaster VRT NWS revealed that more than 1,000 files had been leaked from these workers, including recordings from instances where users accidentally triggered Google’s software. After the incident, Google paused all of its language review operations. 
Google revamped its Assistant privacy policy last year. Among the changes, Google made it the default for the voice assistant not to retain audio recordings once a request was fulfilled, meaning users have to opt in to let Google keep any voice recordings made by the device. It also added a feature that allows users to review and delete past audio recordings.