More stories

  • WhatsApp patches vulnerability related to image filter functionality

    Check Point Research has announced the discovery of a vulnerability in the popular messaging platform WhatsApp that allowed attackers to read sensitive information from WhatsApp’s memory. WhatsApp acknowledged the issue and released a security fix for it in February.

    The messaging platform — considered the most popular globally, with about two billion monthly active users — had an “Out-Of-Bounds read-write vulnerability” related to the platform’s image filter functionality, according to Check Point Research. The researchers noted that exploitation of the vulnerability would have “required complex steps and extensive user interaction.” WhatsApp said there is no evidence that the vulnerability was ever abused.

    The vulnerability was triggered “when a user opened an attachment that contained a maliciously crafted image file, then tried to apply a filter, and then sent the image with the filter applied back to the attacker.” Check Point researchers discovered the vulnerability and disclosed it to WhatsApp on November 10, 2020. WhatsApp issued a fix in February in version 2.21.1.13, which added two new checks on source images and filter images.

    “Approximately 55 billion messages are sent daily over WhatsApp, with 4.5 billion photos and 1 billion videos shared per day. We focused our research on the way WhatsApp processes and sends images. We started with a few image types such as bmp, ico, gif, jpeg, and png, and used our AFL fuzzing lab at Check Point to generate malformed files,” the report explained. 

    “The AFL fuzzer takes a set of input files and applies various modifications to them in a process called a mutation. This generates a large set of modified files, which are then used as input in a target program. When the tested program crashes or hangs due to these crafted files, this might suggest the discovery of a new bug, possibly a security vulnerability.”

    From there, the researchers began to “fuzz” WhatsApp libraries and quickly realized that some images could not be sent, forcing the team to find other ways to use the images. They settled on image filters because they require a significant number of computations and were a “promising candidate to cause a crash.” Image filtering involves “reading the image contents, manipulating the pixel values and writing data to a new destination image,” according to the Check Point researchers, who discovered that “switching between various filters on crafted GIF files indeed caused WhatsApp to crash.”

    “After some reverse engineering to review the crashes we got from the fuzzer, we found an interesting crash that we identified as memory corruption. Before we continued our investigation we reported the issue to WhatsApp, which gave us a name for this vulnerability: CVE-2020-1910, a heap-based out-of-bounds read and write. What’s important about this issue is that given a very unique and complicated set of circumstances, it could have potentially led to the exposure of sensitive information from the WhatsApp application,” the researchers said. “Now that we know we have a heap-based out-of-bounds read and write, according to WhatsApp, we started to dig deeper. We reverse-engineered the libwhatsapp.so library and used a debugger to analyze the root cause of the crash. We found that the vulnerability resides in a native function, applyFilterIntoBuffer(), in the libwhatsapp.so library.”

    The crash is caused by the fact that WhatsApp assumes both the destination and source images have the same dimensions, so a “maliciously crafted source image” of a certain size can lead to an out-of-bounds memory access, causing a crash. The fix now validates that the image format equals 1, meaning both the source and filter images have to be in RGBA format, and it also validates the image size by checking the dimensions of the image.

    In a statement, WhatsApp said it appreciated Check Point’s work but noted that no one should worry about the platform’s end-to-end encryption. “This report involves multiple steps a user would have needed to take and we have no reason to believe users would have been impacted by this bug. That said, even the most complex scenarios researchers identify can help increase security for users,” WhatsApp explained. “As with any tech product, we recommend that users keep their apps and operating systems up to date, to download updates whenever they’re available, to report suspicious messages, and to reach out to us if they experience issues using WhatsApp.”

    Facebook, which owns WhatsApp, announced in September 2020 that it would launch a website dedicated to listing all the vulnerabilities that have been identified and patched for the instant messaging service. WhatsApp previously released a fix for a vulnerability related to a bug in the Voice over IP (VoIP) calling feature of the app on both iOS and Android.
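
    The dimension mismatch Check Point describes is easier to see in a toy example. The sketch below is an illustrative Python reconstruction, not WhatsApp’s native code: the function names, the XOR “filter,” and the byte layout are invented, while the two checks in the fixed version mirror the ones the researchers describe (RGBA format, i.e. format equals 1, and matching dimensions).

```python
# Minimal sketch of the bug class behind CVE-2020-1910 (illustrative only;
# not WhatsApp's actual applyFilterIntoBuffer() implementation). A filter
# copies pixels from a source image into a destination buffer, but sizes
# the loop by the *destination* dimensions and assumes the source has the
# same width/height and a 4-byte RGBA layout.

RGBA_BYTES_PER_PIXEL = 4

def apply_filter_vulnerable(src_pixels: bytes, dst_width: int, dst_height: int) -> bytearray:
    """Walks dst_width * dst_height * 4 bytes of src_pixels without checking
    that the source is actually that large. In native code a smaller crafted
    source turns this into a heap out-of-bounds read; here Python raises
    IndexError, which stands in for the crash."""
    dst = bytearray(dst_width * dst_height * RGBA_BYTES_PER_PIXEL)
    for i in range(len(dst)):
        dst[i] = src_pixels[i] ^ 0xFF   # stand-in "filter" operation
    return dst

def apply_filter_fixed(src_pixels: bytes, src_width: int, src_height: int,
                       src_format: int, dst_width: int, dst_height: int) -> bytearray:
    """Mirrors the two checks described for the patched build: the image
    format must be RGBA (format == 1) and the source dimensions must match
    what the filter is about to read."""
    ANDROID_BITMAP_FORMAT_RGBA_8888 = 1
    if src_format != ANDROID_BITMAP_FORMAT_RGBA_8888:
        raise ValueError("source and filter images must be RGBA")
    expected = src_width * src_height * RGBA_BYTES_PER_PIXEL
    if (src_width, src_height) != (dst_width, dst_height) or len(src_pixels) != expected:
        raise ValueError("source image dimensions do not match destination")
    return apply_filter_vulnerable(src_pixels, dst_width, dst_height)

# A crafted 8x8 source fed to a filter sized for 16x16 over-reads the buffer.
crafted = bytes(8 * 8 * RGBA_BYTES_PER_PIXEL)
try:
    apply_filter_vulnerable(crafted, 16, 16)
except IndexError:
    print("over-read past the end of the source buffer")
```

    In native code the same pattern reads past the end of a heap buffer instead of raising an exception, which is what makes it a candidate for information disclosure rather than just a crash.
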
    Burak Agca, an engineer at cloud security company Lookout, told ZDNet that concern about WhatsApp stems from the well-publicized capabilities of spyware created by NSO Group and originally discovered by Lookout and The Citizen Lab.

    “We have seen multiple variants of the same attack. We have observed that such attacks typically execute an exploit chain taking advantage of multiple vulnerabilities across the app and the operating system in tandem. For example, the first such discovered chain exploited a vulnerability (since patched) in the Safari browser to break out of the application sandbox, following which multiple operating system vulnerabilities (also since patched) were exploited to elevate privileges and install spyware without the user’s knowledge,” Agca said. “The WhatsApp exploit seems to exhibit a similar behavior, and the end-to-end details of these types of exploits come under scrutiny by the security community. For individuals and enterprises, it is clear that relying on WhatsApp saying its messaging is encrypted end to end is simply not enough to keep sensitive data safe.”

  • Australians could soon attach a PDF or hyperlink to a payment

    Australian banking customers may soon be able to attach a PDF or a hyperlink to a payment, with the Reserve Bank of Australia (RBA) signalling work is underway for such a feature.

    “The Australian banking industry is in the process of developing a service to enable large payers, such as corporate or government entities, to send an electronic payment that includes a secure hyperlink to a PDF document,” it said. “The provision of this functionality is part of a growing trend globally to improve efficiency and lower the costs of payment services by improving the amount and quality of data that are able to be transferred together with a payment.”

    The remarks were made in a submission [PDF] the RBA presented to a Senate committee probing the adequacy and efficacy of Australia’s anti-money laundering and counter-terrorism financing (AML/CTF) regime. It raised the payment feature while discussing the potential to whitelist some trusted payer entities. “Some of these initiatives, such as the adoption of the ISO 20022 standard for payment message formats, provide for structured remittance data that make it easier and faster for systems to screen for financial crimes compliance,” the RBA said.

    Explaining how the feature works, the RBA said organisational payers that sign up to the “payment with document” service could optionally include a link to a document when sending a payment. A payee would view the payment in their online banking channel and be able to click on that link, which the RBA argued would provide secure, authenticated access to the PDF document associated with that particular payment.
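
    The RBA has not published a message schema for the service, but the idea can be sketched roughly: the payment carries a secure link rather than the document itself, in the spirit of ISO 20022 structured remittance data. Everything in the sketch below (field names, the document host URL) is hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical shape of a "payment with document" message, loosely in the
# spirit of ISO 20022 structured remittance data. Field names and the
# document URL are placeholders; the RBA has not published a schema.

@dataclass
class PaymentWithDocument:
    payer: str                          # e.g. a government agency or large corporate
    payee_account: str                  # destination account identifier
    amount_aud: float
    remittance_info: str                # structured remittance text
    document_url: Optional[str] = None  # authenticated link held by an accredited
                                        # document host (e.g. myGov for agencies)

payment = PaymentWithDocument(
    payer="Example Government Agency",
    payee_account="123-456 00012345",
    amount_aud=842.50,
    remittance_info="Benefit statement, period ending 2021-08-31",
    document_url="https://documents.example.gov.au/notices/abc123",  # placeholder
)
print(payment)
```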

    It said the documents would be held by accredited document host providers. “For example, if an Australian government agency utilised this service, myGov could store the documents and an authenticated link would provide direct access to the document on myGov,” the RBA said.

    According to the RBA, the payment with document service would provide benefits to both payers and payees. “It would be of particular value to a payer that sends a high volume of correspondence to recipients detailing information about their payments. There is often a lag between payment receipt and receipt of the related correspondence — this generates a high number of calls to customer call centres, which is expensive and inefficient to manage,” the RBA said. “A ‘payment with document’ service would also be valuable to payees, who could quickly and easily access correspondence directly from their banking app or online banking service to understand what the payment was for.”

    The Reserve Bank said providing detailed information with the financial transaction through electronic banking channels makes this information available for consideration by financial institutions in meeting their AML/CTF regulatory obligations. But they will need to be able to screen any documents linked to a payment in order to meet these obligations, and the RBA said this solution does not appeal to all players. A possible solution, it said, would be to maintain a whitelist of trusted payer entities that provides relief to financial institutions from the requirement to screen documents linked to payments from those trusted entities. “Our understanding is financial institutions would welcome such a whitelist, as the cost and effort involved in screening all linked payment documents would be challenging,” the RBA wrote.

    This would mean correspondence sent with a payment by trusted entities, such as an Australian government agency, would remain private to the payment recipient, and banks would be permitted to access the document only in order to provide technical support services to the account holder and only with the account holder’s consent. A formal governance framework would need to be developed for this to occur, and the RBA proposed Austrac could be the authority over the scheme.

    The Legal and Constitutional Affairs References Committee kicked off the inquiry in June. Among other things, it seeks to determine the effectiveness of the Anti-Money Laundering and Counter-Terrorism Financing Act 2006 in preventing money laundering outside the banking sector, the attractiveness of Australia as a destination for proceeds of foreign crime and corruption, and Austrac’s role in policing such activity.

  • Twitter creates 'Safety Mode' to temporarily block accounts caught insulting users

    Twitter is rolling out a new feature called Safety Mode that temporarily blocks certain accounts for seven days if they are found insulting users or repeatedly sending hateful remarks. The feature will only be available to a small group of English-language users on iOS, Android and Twitter.com, the company explained in a blog post on Wednesday. Users will also be blocked if they are sending “repetitive and uninvited replies or mentions,” according to Twitter senior product manager Jarrod Doherty.

    “When the feature is turned on in your Settings, our systems will assess the likelihood of a negative engagement by considering both the Tweet’s content and the relationship between the Tweet author and replier,” Doherty said. “Our technology takes existing relationships into account, so accounts you follow or frequently interact with will not be autoblocked. Authors of Tweets found by our technology to be harmful or uninvited will be autoblocked, meaning they’ll temporarily be unable to follow your account, see your Tweets, or send you Direct Messages.”
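
    Twitter has not said how the scoring works, but the decision Doherty describes can be sketched as a simple rule: exempt existing relationships, otherwise temporarily block when a reply is judged harmful or uninvited. The sketch below is a hypothetical Python illustration; the classifier score, threshold and data model are invented.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the Safety Mode decision described above. The
# scoring, threshold, and data model are invented for illustration;
# Twitter has not published its actual system.

AUTOBLOCK_DAYS = 7

def should_autoblock(author: str, user: dict, harm_score: float) -> bool:
    """Decide whether a reply's author should be temporarily autoblocked."""
    # Existing relationships are exempt: accounts the user follows or
    # frequently interacts with are never autoblocked.
    if author in user["following"] or author in user["frequent_contacts"]:
        return False
    # Otherwise block when the reply looks harmful or uninvited
    # (harm_score stands in for whatever model Twitter actually runs).
    return harm_score >= 0.8

def autoblock(author: str, blocks: dict) -> None:
    """Record a temporary block that lapses after seven days."""
    blocks[author] = datetime.utcnow() + timedelta(days=AUTOBLOCK_DAYS)

# Example: a harmful-looking reply from a stranger triggers a 7-day block.
user = {"following": {"@friend"}, "frequent_contacts": {"@colleague"}}
blocks = {}
if should_autoblock("@stranger", user, harm_score=0.93):
    autoblock("@stranger", blocks)
print(blocks)
```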

    Doherty added that unwelcome Tweets have gotten in the way of the kinds of conversations Twitter wants its users to continue having, prompting the creation of the Safety Mode tool and other features added in recent years to protect people. Users can learn more about the Tweets and accounts that were flagged by Safety Mode and will receive a notification once the Safety Mode ban period is about to end. Twitter will also send a recap of the situation before the period ends. 

    “We won’t always get this right and may make mistakes, so Safety Mode autoblocks can be seen and undone at any time in your Settings. We’ll also regularly monitor the accuracy of our Safety Mode systems to make improvements to our detection capabilities,” Doherty explained. “We want you to enjoy healthy conversations, so this test is one way we’re limiting overwhelming and unwelcome interactions that can interrupt those conversations. Our goal is to better protect the individual on the receiving end of Tweets by reducing the prevalence and visibility of harmful remarks.” In recent years, Twitter has worked with human rights groups and mental health organizations to get feedback about their platform and changes that need to be made to better protect users from discrimination, racism, sexism and other issues that have become rampant on the site. 

    Twitter also created a Trust and Safety Council that it said pushed for certain changes to Safety Mode that would make it less likely to be manipulated. The council also nominated certain Twitter accounts to join the inaugural group of users that will have access to Safety Mode, with a particular emphasis on providing the tool to people from marginalized communities and female journalists.

    Digital human rights group Article 19 — which is a member of the Trust and Safety Council — said it provided feedback on Safety Mode “to ensure it entails mitigations that protect counter-speech while also addressing online harassment towards women and journalists.” “Safety Mode is another step in the right direction towards making Twitter a safe place to participate in the public conversation without fear of abuse,” Article 19 said in a statement.

    Doherty noted that Twitter has taken part in other discussions about ways women can customize their experience on the site through tools like Safety Mode. Twitter will see how the tool is used and make adjustments as it rolls it out to the larger Twitter user base.

    The site has been making changes in recent months to cut down on the disinformation and abuse that have caused outrage among users for many years. In August, the site announced that it was conducting a test that would allow users in the US, South Korea and Australia to report misleading tweets, which have gained prominence during the COVID-19 pandemic and subsequent vaccine rollout.

  • Apple adds driver's licenses, state IDs to Apple Wallet

    Apple is working with eight states to bring state IDs and driver’s licenses to Apple Wallet, in a move that could make airport check-ins easier. The company said that Arizona, Connecticut, Georgia, Iowa, Kentucky, Maryland, Oklahoma and Utah will be bringing their IDs to Apple Wallet for display on the iPhone and Apple Watch. Apple is pushing Apple Wallet to be an ID repository, with plans to add student IDs, corporate badges, hotel keys and other items.

    Arizona and Georgia will be the first states to enable residents to add their driver’s license or state ID to Apple Wallet. The Transportation Security Administration (TSA) will enable some airport security checkpoints and lanes to allow customers to use Apple Wallet for their ID and pass through with a phone tap. Specific dates for the Apple Wallet ID rollout will be shared by the TSA and participating states.

    The process goes like this (sketched below):
    • A consumer would add an ID or license to Apple Wallet as they would a credit card or transit pass; if paired with an Apple Watch, the consumer could also add the ID to the watch.
    • The consumer would be asked to use their iPhone to scan their physical driver’s license or state ID card and take a selfie that would be provided to the state for verification. For additional security, customers will be prompted to complete a series of facial and head movements.
    • The issuing state would then verify the ID before it is added to Apple Wallet.
    • Once added, the TSA will be able to accept the ID with a tap at its identity reader, and the user authorizes sharing the requested identity information with Face ID or Touch ID.

    Apple also said there are privacy protections, including:
    • Apple and the issuing states don’t know when or where the IDs are presented.
    • ID data is encrypted and protected with biometric authentication.
    • ID information is presented through encrypted communication between the device and the identity reader.
    • The Find My app can lock, locate and erase misplaced devices.
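
    As a rough sketch of the enrolment and presentment steps above, the toy Python below models the flow end to end. All names, fields and checks are invented; Apple’s Wallet APIs and the states’ verification pipelines are not public.

```python
# Hypothetical sketch of the enrolment and presentment flow described above.
# All names, fields, and checks are invented for illustration.

def enroll_id(id_scan: bytes, selfie: bytes, liveness_ok: bool) -> dict:
    """The device submits an ID scan, a selfie, and the result of the
    facial/head-movement liveness check; the issuing state verifies them
    and (in this toy model) returns a provisioned Wallet pass."""
    if not liveness_ok:
        raise ValueError("liveness check failed")
    # Stand-in for the state's back-end verification of scan vs. selfie.
    if not id_scan or not selfie:
        raise ValueError("state could not verify the ID")
    return {"pass_id": "wallet-id-0001",
            "fields": {"name": "A. Example", "dob": "1990-01-01"}}

def present_id(wallet_pass: dict, requested_fields: list, biometric_ok: bool) -> dict:
    """Nothing is shared until the user approves with Face ID or Touch ID;
    only the fields the reader asked for are released."""
    if not biometric_ok:
        return {}
    return {k: v for k, v in wallet_pass["fields"].items() if k in requested_fields}

# Example: a TSA-style reader requests only the holder's name.
issued = enroll_id(b"<scan>", b"<selfie>", liveness_ok=True)
print(present_id(issued, ["name"], biometric_ok=True))
```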

  • Half of businesses can't spot these signs of insider cybersecurity threats

    Most businesses are struggling to identify and detect early indicators that could suggest an insider is plotting to steal data or carry out other cyberattacks. Research by security think tank the Ponemon Institute and cybersecurity company DTEX Systems suggests that over half of companies find it impossible or very difficult to prevent insider attacks.

    These businesses are missing indicators that something might be wrong. Those include unusual amounts of files being opened, attempts to use USB devices, staff purposefully circumventing security controls, masking their online activities, or moving and saving files to unusual locations. All these and more might suggest that a user is planning malicious activity, including the theft of company data.

    Insider threats can come in a number of forms, ranging from employees who plan to take confidential data when they leave for another job, to those who are actively working with cyber criminals, potentially even to lay the foundations for a ransomware attack. In many cases, an insider preparing to carry out an attack will follow a set pattern of activities including reconnaissance, circumvention, aggregation, obfuscation and exfiltration, all of which could suggest something is amiss. But businesses are struggling to detect the indicators of insider threat in each of these stages because of a lack of effective monitoring controls and practices.
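
    As a rough illustration of how those early indicators could be turned into a signal, the sketch below scores a user’s observed behaviours against the list above. The weights, threshold and event names are invented; real insider-risk tooling (including DTEX’s) is far more nuanced.

```python
# Hypothetical scoring of the early indicators listed above. Weights,
# thresholds, and event names are invented for illustration.

INDICATOR_WEIGHTS = {
    "unusual_file_volume": 2,      # far more files opened than the user's baseline
    "usb_device_attempt": 2,       # attempts to mount or write to USB storage
    "control_circumvention": 3,    # disabling agents, using unsanctioned tools
    "activity_masking": 3,         # clearing logs, anonymising traffic
    "unusual_save_location": 1,    # staging files in odd local or cloud paths
}

def insider_risk_score(events: list) -> int:
    """Sum the weights of any indicators observed for a user."""
    return sum(INDICATOR_WEIGHTS.get(e, 0) for e in set(events))

# Example: a user who opens an unusual volume of files, tries a USB drive,
# and stages data in an odd location crosses a (hypothetical) review threshold.
observed = ["unusual_file_volume", "usb_device_attempt", "unusual_save_location"]
if insider_risk_score(observed) >= 4:
    print("flag for review by the insider-risk owner")
```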

    “The vast majority of security threats follow a pattern or sequence of activity leading up to an attack, and insider threats are no exception,” said Larry Ponemon, chairman and founder of the Ponemon Institute. Many security professionals are already familiar with Lockheed Martin’s Cyber Kill Chain and the MITRE ATT&CK Framework, both of which describe the various stages of an attack and the tactics utilized by an external adversary, he said. But because human behavior is more nuanced than machine behavior, insider attacks follow a slightly different path and therefore require modern approaches to combat them.

    Just a third of businesses believe they’re effective at preventing data from being leaked from the organisation. According to the research, one of the key reasons insider threats aren’t being detected is confusion around who is responsible for controlling and mitigating risks. While 15% of those surveyed suggested that the CIO, CISO or head of the business is responsible, 15% suggested that nobody has ultimate responsibility in this space – meaning that managing and detecting the risks and threats can fall between the cracks.

    There are several factors that make detecting cybersecurity risks – including insider threats – difficult. Over half of businesses cite a lack of in-house expertise in dealing with threats, while just under half say there’s a lack of budget, and the shift to remote working has also made it harder to mitigate cybersecurity risks.

    According to Ponemon and DTEX, the best way for companies to improve their ability to detect insider threats is to improve the security posture of the business, as well as designating a clear authority for controlling and mitigating this risk – one that can investigate activities that could suggest a potential insider attack.

    “Our findings indicate that in order to fully understand any insider incident, visibility into the nuance and sequence of human behavior is pivotal,” said Rajan Koo, chief customer officer at DTEX Systems. “Organisations need to take a human approach to understanding and detecting insider threats, as human elements are at the heart of these risks,” he added.

  • This is why the Mozi botnet will linger on

    It has been two years since the emergence of Mozi, and despite the arrest of its alleged author, the botnet continues to spread.

    Mozi was discovered in 2019 by 360 Netlab and, in the two years since, has grown from a small operation to a botnet that “accounted for an extremely high percentage of [Internet of Things] IoT traffic at its peak.” According to Netlab (translated), Mozi has accounted for over 1.5 million infected nodes, the majority of which — 830,000 — originate from China.

    Mozi is a P2P botnet that uses the DHT protocol. In order to spread, it abuses weak Telnet passwords and known exploits to target networking devices, IoT products, and video recorders, among other internet-connected devices. The botnet is able to enslave devices to launch Distributed Denial-of-Service (DDoS) attacks, deploy payloads, steal data, and execute system commands. If routers are infected, this could lead to Man-in-The-Middle (MITM) attacks.

    Earlier this month, Microsoft IoT security researchers said that Mozi has evolved to “achieve persistence on network gateways manufactured by Netgear, Huawei, and ZTE” by adapting its persistence mechanisms depending on each device’s architecture. In July, Netlab said it had assisted law enforcement in arresting the alleged developer of Mozi, and therefore, “we don’t think it will continue to be updated for quite some time to come.”

    However, the botnet lives on, and on Tuesday the company offered its opinion on why. “We know that Mozi uses a P2P network structure, and one of the ‘advantages’ of a P2P network is that it is robust, so even if some of the nodes go down, the whole network will carry on, and the remaining nodes will still infect other vulnerable devices,” Netlab says. “That is why we can still see Mozi spreading.”

    According to the team, alongside the main Mozi_ftp component, the discovery of malware using the same P2P setup — Mozi_ssh — suggests that the botnet is also being used to cash in on illegal cryptocurrency mining. In addition, users are harnessing Mozi’s DHT configuration module and creating new, functional nodes for it, which the team says allows them to “quickly develop the programs needed for new functional nodes, which is very convenient.” “This convenience is one of the reasons for the rapid expansion of the Mozi botnet,” Netlab added.

    The team also said that a sample of the botnet dubbed v2s, captured last year, suggests that updates to Mozi have been focused on separating control nodes from “mozi_bot” nodes, as well as improving efficiency. It may be that these changes were made by the authors to lease the network to other threat actors.

    “The Mozi botnet samples have stopped updating for quite some time, but this does not mean that the threat posed by Mozi has ended,” the researchers say. “Since the parts of the network that are already spread across the internet have the ability to continue to be infected, new devices are infected every day.” Netlab predicts that, week by week, the size of the botnet will gradually decrease, but it is likely that the impact of Mozi will be felt for some time to come.
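
    Netlab’s point about P2P robustness can be shown with a toy simulation: knock out half the nodes of a random peer graph and the survivors usually remain one connected component, able to keep spreading. The sketch below models a generic peer graph in Python, not Mozi’s actual DHT behaviour, and the node counts are arbitrary.

```python
import random

# Toy illustration of why a P2P botnet is hard to kill: even after a large
# fraction of nodes drops out, the surviving nodes usually stay connected
# and can keep distributing updates and infecting new devices. This models
# a generic random peer graph, not Mozi's actual DHT behaviour.

def build_peer_graph(n_nodes: int, peers_per_node: int) -> dict:
    graph = {i: set() for i in range(n_nodes)}
    for node in graph:
        for peer in random.sample([p for p in graph if p != node], peers_per_node):
            graph[node].add(peer)
            graph[peer].add(node)
    return graph

def largest_component(graph: dict) -> int:
    """Size of the largest connected component, via depth-first search."""
    seen, best = set(), 0
    for start in graph:
        if start in seen:
            continue
        stack, size = [start], 0
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            size += 1
            stack.extend(graph[node] - seen)
        best = max(best, size)
    return best

graph = build_peer_graph(n_nodes=1000, peers_per_node=8)
survivors = set(random.sample(list(graph), 500))          # half the nodes go offline
pruned = {n: graph[n] & survivors for n in survivors}
print(f"{largest_component(pruned)} of {len(survivors)} survivors still reachable")
```

    In runs of this toy model the surviving half of the graph stays almost entirely connected, which is the property Netlab is pointing to.
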

  • Cream Finance platform pilfered for over $34 million in cryptocurrency

    Cream Finance has lost over $34 million in cryptocurrency after a cyberattacker exploited a vulnerability in the project’s market system. 

    The decentralized finance (DeFi) organization is the developer of a lending protocol for individuals, with yields on offer for some cryptocurrency stakes. Assets on the platform include Ethereum (ETH), the AMP token, the CREAM token, USDT, and COMP.

    Cream said an attacker managed to exploit a vulnerability on August 31, leading to the theft of 462,079,976 AMP tokens ($24.2 million) and 2,804.96 ETH ($9.9 million), according to an update posted on September 1. At current prices, this amounts to over $34 million. In an analysis of the attack conducted with the assistance of PeckShield, Cream said an error in how the platform integrated AMP, leading to a reentrancy bug, was the source of the exploit. “While unfortunate and disappointing, we take ownership of the error,” the developers say.

    Cream is now working with law enforcement to try to trace the attacker — or attackers, as the platform says a “copycat” was also in play at the time of the main attack. The second individual has a transaction history with Binance.
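
    Cream has not published the faulty code, but the class of bug it names, reentrancy, follows a well-known pattern: an external call is made before internal accounting is updated, so the callee can re-enter and act on stale state. The plain-Python sketch below illustrates that pattern with an invented lending pool and attacker; it is not the Cream or AMP contract code.

```python
# Toy illustration of a reentrancy bug, the general class of flaw Cream
# attributes to its AMP integration. A "token transfer" triggers a
# recipient callback before the lender updates its books, so the callback
# can borrow again against the same, stale accounting.

class VulnerableLendingPool:
    def __init__(self, liquidity: int):
        self.liquidity = liquidity
        self.outstanding = 0

    def borrow(self, amount: int, recipient_callback) -> None:
        if self.outstanding + amount > self.liquidity:
            raise ValueError("insufficient liquidity")
        # External call happens BEFORE the accounting update -- the classic
        # violation of the checks-effects-interactions pattern.
        recipient_callback(self)
        self.outstanding += amount

class Attacker:
    def __init__(self):
        self.borrowed = 0

    def __call__(self, pool: VulnerableLendingPool) -> None:
        self.borrowed += 100
        if self.borrowed < 500:          # re-enter while `outstanding` is stale
            pool.borrow(100, self)

pool = VulnerableLendingPool(liquidity=200)
attacker = Attacker()
pool.borrow(100, attacker)
print(attacker.borrowed, "borrowed against a pool that only had 200")
```

    The standard remedy is to update internal state before making any external call (the checks-effects-interactions pattern) or to use a reentrancy guard, which is the kind of ordering a patched integration has to enforce.
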

    The organization has paused AMP supply and borrow functions until a patch can be deployed. The stolen ETH and AMP will be replaced, with 20% of protocol fees now earmarked to repay customers. Cream says that if the attacker is willing to return the stolen cryptocurrency, they can keep 10% without any consequences, as a form of bug bounty payment. However, if others are able to provide a lead on the identity of the cyberattacker leading to their arrest and/or prosecution, 50% of the value of the stolen funds is on offer as a reward. If neither offer is successful, “we will forward all relevant information to law enforcement authorities and prosecute to the fullest extent of the law,” the company says.

    This is not the first time Cream has fallen victim to a cyberattack. In February, the platform lost $37.5 million due to a flash loan exploit made via IronBank. Earlier this month, DeFi platform Poly Network said an attacker exploited a vulnerability in the platform to siphon away roughly $610 million in cryptocurrency, including BSC and ETH tokens. The thief has since returned the funds and is referred to as “Mr. White Hat” in Poly blog posts. The company has returned assets to their rightful owners and is currently in the process of restoring cross-chain services.

  • Scam artists are recruiting English speakers for business email campaigns

    Native English speakers are being recruited in their droves by criminals trying to make Business Email Compromise (BEC) more effective. 

    BEC schemes can be simple to execute and are among the most potentially devastating for a business, alongside threats such as ransomware. A BEC scam will usually start with a phishing email, tailored and customized depending on the victim. Social engineering and email address spoofing may also be used to make the message appear to originate from someone in the target company — such as an executive, the CEO, or a member of an accounts team — in order to fool an employee into making a payment to an account controlled by a criminal. In some cases, these payments — intended to pay an alleged invoice, for example — can reach millions of dollars. In 2020, US companies alone lost roughly $1.8 billion to these forms of cyberattack.

    Little technical knowledge is required to pull off a BEC scam; however, threat actors need to be able to communicate effectively in order to succeed in these endeavors — and if they are not fluent in the language a target speaks, this can cause BEC attacks to ultimately fail. Unfortunately, there are ways to plug this gap in expertise: recruit a native language speaker from the underground. According to Intel 471, forums are now being used to seek out English speakers, in particular, to bring together teams able to manage both the technical aspects and social engineering elements of a BEC scam.
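
    On the defensive side, two of the red flags described here (a spoofed executive display name and a mismatched Reply-To) are simple to check for. The sketch below is a toy Python filter; the names and domains are placeholders, and real defences layer this with SPF, DKIM and DMARC verification.

```python
from email import message_from_string
from email.utils import parseaddr

# Toy sketch of flagging two common BEC red flags in an inbound message:
# an executive's display name paired with an external sender domain, and a
# Reply-To that differs from the From address. Names and the domain are
# placeholders for illustration.

COMPANY_DOMAIN = "example.com"
EXECUTIVES = {"jane doe", "john smith"}          # display names criminals imitate

def bec_red_flags(raw_message: str) -> list:
    msg = message_from_string(raw_message)
    display_name, from_addr = parseaddr(msg.get("From", ""))
    _, reply_to = parseaddr(msg.get("Reply-To", ""))
    flags = []
    if display_name.lower() in EXECUTIVES and not from_addr.endswith("@" + COMPANY_DOMAIN):
        flags.append("executive display name from an external domain")
    if reply_to and reply_to.lower() != from_addr.lower():
        flags.append("Reply-To differs from From")
    return flags

sample = ("From: Jane Doe <jane.doe@lookalike-domain.net>\n"
          "Reply-To: payments@lookalike-domain.net\n"
          "Subject: Urgent invoice\n\n"
          "Please wire the attached invoice today.")
print(bec_red_flags(sample))
```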

    Over the course of 2021, threat actors have posted ‘wanted’ adverts on a popular Russian-speaking cybercriminal forum asking for native English speakers, who would later be tasked with managing email communication that would not raise red flags with members of a high-level organization, as well as handling the negotiation aspect of a BEC operation. If a scam is to succeed, the target employee must believe the communication comes from a legitimate source — and secondary language use, spelling mistakes, and grammatical issues could all be indicators that something isn’t right, in the same way that run-of-the-mill spam often contains issues that alert recipients to attempted fraud. “Actors like those we witnessed are searching for native English speakers since North American and European markets are the primary targets of such scams,” the researchers say.

    In addition, threat actors are also trying to recruit launderers to clean up the proceeds from BEC schemes, often achieved through cryptocurrency mixer and tumbler platforms. One advert spotted by the team asked for a service able to launder up to $250,000.

    “The BEC footprint on underground forums is not as large as other types of cybercrime, likely since many of the operational elements of BEC use targeted social engineering tactics and fraudulent domains, which do not typically require technical services or products that the underground offers,” Intel 471 says. “[…] Criminals will use the underground for all types of schemes, as long as those forums remain a hotbed of skills that can make criminals money.”