More stories

  •

    Hackers exploit websites to give them excellent SEO before deploying malware

    Cyberattackers have turned to search engine optimization (SEO) techniques to deploy malware payloads to as many victims as possible. 


    According to Sophos, the so-called search engine “deoptimization” method includes both SEO tricks and the abuse of human psychology to push websites that have been compromised up Google’s rankings. 
    SEO is used by webmasters to legitimately increase their website’s exposure on search engines such as Google or Bing. However, Sophos says that threat actors are now tampering with the content management systems (CMS) of websites to serve financial malware, exploit tools, and ransomware. 
    In a blog post on Monday, the cybersecurity team said the technique, dubbed “Gootloader,” involves deploying the infection framework for the Gootkit Remote Access Trojan (RAT), which also delivers a variety of other malware payloads. 
    Using SEO to deploy the Gootkit RAT is no small operation. The researchers estimate that a network of 400 or more servers must be maintained at any given time for the scheme to succeed. 
    While it isn’t known if a particular exploit is used to compromise these domains in the first place, the researchers say that CMSs running the backend of websites could have been hijacked via malware, stolen credentials, or brute-force attacks. 

    Once the threat actors have obtained access, a few lines of code are inserted into the body of website content. Checks are performed to ascertain whether the visitor is of interest as a target, based on factors such as their IP address and location, and queries originating from Google search are the ones most commonly accepted. 

    Websites compromised by Gootloader are manipulated to answer specific search queries. Fake message boards are a constant theme in hacked websites observed by Sophos, in which “subtle” modifications are made to “rewrite how the contents of the website are presented to certain visitors.”
    “If the right conditions are met (and there have been no previous visits to the website from the visitor’s IP address), the malicious code running server-side redraws the page to give the visitor the appearance that they have stumbled into a message board or blog comments area in which people are discussing precisely the same topic,” Sophos says.
    If the attackers’ criteria aren’t met, the browser displays a seemingly normal web page, which eventually dissolves into garbage text. 
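    Based on Sophos’s description, the gating logic amounts to a few server-side checks. A minimal Python sketch (the function name, fields, and country list are illustrative assumptions, not Gootloader’s actual code):

```python
# Hypothetical sketch of the server-side cloaking checks Sophos describes;
# names and the targeted-country list are illustrative, not Gootloader's code.
SEEN_IPS: set[str] = set()  # IP addresses that have already visited once

def should_show_fake_forum(ip: str, referrer: str, country: str) -> bool:
    """Show the fake message board only to first-time visitors who
    arrived from a Google search and are in a targeted country."""
    if ip in SEEN_IPS:
        return False  # repeat visits from the same IP get the normal page
    SEEN_IPS.add(ip)
    came_from_google = "google." in referrer
    targeted = country in {"US", "DE", "FR", "KR"}
    return came_from_google and targeted
```

    Everyone else, including repeat visitors and security researchers revisiting the page, sees only the innocuous version of the site, which is what makes the campaign hard to spot.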
    A fake forum post will then be displayed containing an apparent answer to the query, as well as a direct download link. In one example discussed by the team, the website of a legitimate neonatal clinic was compromised to show fake answers to questions relating to real estate. 

    Victims who click on the direct download links will receive a .zip archive file, named in relation to the search term, that contains a .js file. 
    The .js file executes and runs in memory, where obfuscated code is decrypted to call further payloads. 
    According to Sophos, the technique is being used to spread the Gootkit banking Trojan, Kronos, Cobalt Strike, and REvil ransomware, among other malware variants, in South Korea, Germany, France, and the United States. 
    “At several points, it’s possible for end-users to avoid the infection, if they recognize the signs,” the researchers say. “The problem is that, even trained people can easily be fooled by the chain of social engineering tricks Gootloader’s creators use. Script blockers like NoScript for Firefox could help a cautious web surfer remain safe by preventing the initial replacement of the hacked web page to happen, but not everyone uses those tools.”

    Previous and related coverage
    Have a tip? Get in touch securely via WhatsApp | Signal at +447713 025 499, or over at Keybase: charlie0

  •

    Tether faces 500 Bitcoin ransom: We are ‘not paying’

    Tether has revealed a ransom demand in which threat actors are allegedly demanding 500 Bitcoin ($24 million). 

    Over the weekend, the blockchain and cryptocurrency organization said on Twitter that a demand for payment had been made, on pain of documents being leaked online that would “harm the Bitcoin ecosystem.” 
    The wallet address associated with the demand, at the time of writing, has $72 in BTC stored. 
    Tether said that the payment deadline is March 1, but added, “We are not paying.”
    “It is unclear whether this is a basic extortion scheme like those directed at other crypto companies or people looking to undermine Tether and the crypto community as a whole,” Tether says. “Either way, those seeking to harm Tether are getting increasingly desperate.”
    The company also used the same thread to claim that documents circulating online, allegedly showing dubious communication between employees of Tether, Deltec Bank & Trust, and other parties, are “forged”. 
    The unverified email screenshots appear to relate to Bahamas-based Deltec, which has a banking relationship with Tether, and a discussion over asset backing. Tether says the documents are “bogus.”

    In a separate tweet, Tether and Bitfinex CTO Paolo Ardoino said the main goal of these alleged leaks “is to discredit #bitcoin and all #crypto.”
    “While we believe this is a pretty sad attempt at a shakedown, we take it seriously,” Tether commented. “We have reported the forged communications and the associated ransom demand to law enforcement. As always, we will fully support law enforcement in an investigation of this extortion scheme.”
    Update 14.37 GMT: Tether told ZDNet that the company does not know the identity of the individual making the ransom demand and is “not in a position” to provide a copy of the ransom note “at this time.”
    In other Tether news, the organization has reached an $18.5 million settlement with the New York Attorney General’s Office over a case in which both Tether and Bitfinex were accused of covering up an $850 million loss.
    Letitia James, NY attorney-general, said the firms had “recklessly and unlawfully covered up massive financial losses to keep their scheme going and protect their bottom lines,” adding that “Tether’s claims that its virtual currency was fully backed by US dollars at all times was a lie.”
    Tether admitted no wrongdoing but has agreed to settle, a gesture the firm says “should be viewed as a measure of our desire to put this matter behind us and focus on our business.”

  •

    Judge approves $650m settlement for Facebook users in privacy, biometrics lawsuit

    A $650 million settlement to close a class-action lawsuit alleging that Facebook violated user privacy has been approved. 

    The case, a class-action lawsuit filed against the social media giant six years ago, alleged that Facebook violated the Illinois Biometric Information Privacy Act (BIPA), which prevents companies from gathering or using biometric information from users without consent. 
    The lawsuit claimed that the Facebook Tag Suggestions feature, which used facial markers to suggest people in image tagging, violated BIPA by scanning, storing, and using user biometrics to create “face templates” without written permission.
    On Friday, in California, US District Judge James Donato approved the $650 million settlement, an increase of $100 million from Facebook’s proposed $550 million in January 2020. 
    The ruling has been described as a “landmark result.” 
    In total, close to 1.6 million Facebook users in Illinois could receive as much as $345 each within months, on the assumption that no appeal is filed, as reported by the Chicago Tribune. 
    However, only users who signed up for representation in the class-action suit before the November 23, 2020 deadline are eligible for compensation. 

    The three plaintiffs who originally filed the suit will receive $5,000 each. 
    “Overall, the settlement is a major win for consumers in the hotly contested area of digital privacy,” the order read. “Final approval of the class action settlement is granted. Attorneys’ fees and costs, and incentive awards to the named plaintiffs, are also granted.”
    In a statement, Facebook said, “we are pleased to have reached a settlement so we can move past this matter, which is in the best interest of our community and our shareholders.”
    In related news over the past week, video content-sharing platform TikTok has agreed to a $92 million settlement to resolve claims that the company harvested and shared data belonging to minors. 
    The case, originating from 21 class-action lawsuits filed in California and Illinois, also included allegations of BIPA violations. 
    TikTok has agreed to the settlement — despite denying any wrongdoing — in order to focus on “building a safe and joyful experience for the TikTok community.”

  •

    TikTok removed 89M videos, with the US the top source

    TikTok has released its latest transparency report, revealing that more than 89.13 million videos were removed from its platform in the second half of 2020. The largest share of these, at 11.78 million, came from the United States, and 83.3% were yanked before they clocked any views.
    The videos, which accounted for under 1% of all videos uploaded on TikTok, were removed for violating various conditions detailed in the Chinese tech company’s community guidelines or terms of service. These included safety involving minors, violent and graphic content, illegal activities and regulated goods, and suicide and dangerous activities, according to its latest and fourth transparency report.
    Some 92.4% of videos were removed before users reported them and 93.5% within 24 hours of being posted. More than 6.14 million accounts were shuttered, while almost 9.5 million spam accounts were removed along with 5.23 million spam videos posted by these accounts. Some 173.25 million accounts were stopped from being created through automated means. 
    In addition, more than 3.5 million ads also were rejected for violating the company’s advertising policies and guidelines, said TikTok, which noted that it did not accept paid political ads. 
    Apart from the US, some 8.22 million removed videos originated in Pakistan, while 7.51 million were from Brazil and 4.75 million were from Russia. Indonesia rounded out the top five countries, accounting for 3.86 million of the videos removed worldwide. 
    Amongst government agencies that submitted requests to restrict or remove content on the video platform, Russia led the pack with 135 such requests, followed by Pakistan at 97, and Australia at 32. 
    Owned by ByteDance, TikTok also operated a COVID-19 information hub, which it said clocked some 2.63 billion views in the second half of last year. Public service announcements directing users to the World Health Organisation and local public health resources were viewed more than 38.01 billion times. 

    TikTok added that it removed 51,505 videos for promoting COVID-19 misinformation, 86% of which were yanked out before users reported them and 87% within 24 hours of being uploaded on the platform. Some 71% did not clock any views before they were removed. 
    In the first half of 2020, the video platform removed more than 104.54 million videos, with India and the US contributing the most of such content at 37.68 million and 9.82 million, respectively. 

  •

    Minion privilege escalation exploit patched in SaltStack Salt project

    The Salt Project has patched a privilege escalation bug impacting SaltStack Salt minions that could be used during a wider exploit chain. 

    The vulnerability, CVE-2020-28243, is described as a privilege escalation bug impacting SaltStack Salt minions allowing “an unprivileged user to create files in any non-blacklisted directory via a command injection in a process name.” 
    The bug has been given a severity rating of 7.0 and impacts Salt versions before 3002.5.
    SaltStack’s Salt is an open source project and software designed for automation and infrastructure management. 
    In November, Immersive Labs’ security researcher Matthew Rollings performed a scan on the tool using Bandit, a Python application security scanner, and came across the bug as a result. 
    Salt includes a master system and minions; the latter carry out commands issued by the master, and both often run as root. Rollings discovered a command injection vulnerability in minions, triggered when the master system summons a process called restartcheck. Exploits can be launched by attackers using crafted process names, permitting local users to escalate their privileges to root — as long as they are able to create files on a minion in a non-blacklisted directory. 
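    The bug class at work is straightforward to illustrate: splicing an attacker-controlled process name into a shell command string lets shell metacharacters in the name execute as commands. A minimal Python sketch (hypothetical helpers for illustration, not Salt’s actual restartcheck code):

```python
import subprocess

# Hypothetical illustration of the command-injection class behind
# CVE-2020-28243; these helpers are not Salt's actual restartcheck code.

def unsafe_check(process_name: str) -> str:
    # The process name is interpolated into a shell command string, so a
    # name like "legit; echo INJECTED" runs the injected command too.
    result = subprocess.run(f"echo {process_name}", shell=True,
                            capture_output=True, text=True)
    return result.stdout

def safe_check(process_name: str) -> str:
    # Passing arguments as a list bypasses the shell entirely, so
    # metacharacters in the name are treated as plain text.
    result = subprocess.run(["echo", process_name],
                            capture_output=True, text=True)
    return result.stdout
```

    Given the input `legit; echo INJECTED`, the first variant executes the injected command, while the second echoes the string verbatim.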
    With further investigation, the researcher noted it may also be possible to perform container escapes, including performing the exploit “within a container to gain command execution as root on the host machine.”

    In addition, Rollings said the vulnerability “may be performed by an attacker without local shell access, [and] under certain circumstances, remote users can influence process names.” However, this form of attack is considered “unlikely” and could be difficult to trigger. 
    The Salt Project resolved the vulnerability in a February security release. The group also patched other high-impact bugs including CVE-2021-3197, a shell injection flaw in Salt-API’s SSH client; CVE-2021-25281, an eAuth security issue that could allow remote attackers to run any wheel modules on the master, and CVE-2021-25283, a failure to protect against server-side template injection attacks. 
    ZDNet has reached out to the Salt Project and will update when we hear back. 

  •

    Businessman charged with intent to steal General Electric’s secret silicon technology

    A Chinese businessman has been charged with intent to steal General Electric’s (GE) semiconductor technology. 

    On Friday, the US Department of Justice (DoJ) said that Chi Lung Winsman Ng, a 64-year-old resident of Hong Kong, allegedly plotted to steal MOSFET intellectual property with the overall goal of developing a business — and rival — based on GE’s technology. 
    According to the DoJ indictment, between roughly March 2017 and January 2018, Ng teamed up with a co-conspirator, a former GE engineer, to hash out a plan to steal the company’s proprietary data. 
    General Electric’s silicon carbide metal-oxide semiconductor field-effect transistors (MOSFETs) are semiconductor designs that the company has been working on for more than a decade. GE’s chips are used in a variety of products and have landed the firm contracts in both the automotive and military space. 
    Assistant Attorney General John Demers of the DoJ’s National Security Division said that Ng and co-conspirators “chose to steal what they lacked the time, talent or money to create.”
    The DoJ claims that the pair went so far as to create PowerPoint presentations to impress investors with their start-up’s business plan and told interested parties that the new company could be profitable within three years. 
    Ng and the engineer allegedly claimed that the start-up owned assets worth $100 million — including intellectual property — and sought to secure $30 million in funding. At least one meeting took place with a Chinese investor, according to US prosecutors. 

    “We have no evidence that there was an illegal MOSFET technology transfer to any Chinese companies, including the company that Ng and his co-conspirator were trying to start,” the DoJ added. 
    Ng has been charged with conspiracy to steal trade secrets. If arrested and found guilty, the businessman would face up to 10 years behind bars and a fine of up to $250,000. 
    “According to the indictment, Mr. Ng conspired to steal valuable and sensitive technology from GE and produce it in China,” commented Special Agent in Charge Thomas Relford of the FBI’s Albany Field Office. “Our office, the US Attorney’s Office, and GE coordinated closely and worked quickly to prevent that theft and the resulting damage to our economic security.”

  •

    Why would you ever trust Amazon's Alexa after this?

    It was only the other day that I was wondering whether it would be fun to have a cuckoo clock in my kitchen.


    An Amazon Alexa-powered cuckoo clock, that is.
    I concluded that the idea was utterly bonkers, as are most things Alexa-enabled.
    But we all have our prejudices and many Americans are only too delighted to have Amazon’s Echos and Dots strewn about their homes to make their lives easier.
    Why, Alexa can even buy you your mummy, should you want.
    Yet perhaps Alexa-lovers should be warned that things may not be as delightful as they seem.
    Skills? Oh, Everyone’s Got Skills.
    New research from concerned academics at Germany’s Ruhr-University Bochum, together with equally concerned colleagues from North Carolina State — and even a researcher who, during the project, joined Google — may just make Alexa owners wonder about the true meaning of an easy life.

    The researchers looked at 90,194 Alexa skills. What they found was a security Emmenthal that would make a mouse wonder whether there was any cheese there at all.
    How much would you like to shudder, oh happy Alexa owner?
    How about this sentence from Dr. Martin Degeling: “A first problem is that Amazon has partially activated skills automatically since 2017. Previously, users had to agree to the use of each skill. Now they hardly have an overview of where the answer Alexa gives them comes from and who programmed it in the first place.”
    So the first problem is that you have no idea where your clever answer comes from whenever you rouse Alexa from her slumber. Or, indeed, how secure your question may have been.
    Ready for another quote from the researchers? Here you go: “When a skill is published in the skill store, it also displays the developer’s name. We found that developers can register themselves with any company name when creating their developer’s account with Amazon. This makes it easy for an attacker to impersonate any well-known manufacturer or service provider.”
    Please, this is the sort of thing that makes us laugh when big companies get hacked — and don’t tell us for months, or even years.
    These researchers actually tested the process for themselves. “In an experiment, we were able to publish skills in the name of a large company. Valuable information from users can be tapped here,” they said, modestly.
    This finding was bracing, too. Yes, Amazon has a certification process for these skills. But “no restriction is imposed on changing the backend code, which can change anytime after the certification process.”
    In essence, then, a malicious developer could change the code and begin to hoover up sensitive personal data.

    Security? Yeah, It’s A Priority.
    Then, say the researchers, there are the skills developers who publish under a false identity.
    Perhaps, though, this all sounds too dramatic. Surely all these skills have privacy policies that govern what they can and can’t do.
    Please sit down. From the research: “Only 24.2% of skills have a privacy policy.” So three-quarters of the skills, well, don’t.
    Don’t worry, though, there’s worse: “For certain categories like ‘kids’ and ‘health and fitness’ only 13.6% and 42.2% skills have a privacy policy, respectively. As privacy advocates, we feel both ‘kids’ and ‘health’ related skills should be held to higher standards with respect to data privacy.”
    Naturally, I asked Amazon what it thought of these slightly chilly findings.
    An Amazon spokesperson told me: “The security of our devices and services is a top priority. We conduct security reviews as part of skill certification and have systems in place to continually monitor live skills for potentially malicious behavior. Any offending skills we identify are blocked during certification or quickly deactivated. We are constantly improving these mechanisms to further protect our customers.”
    It’s heartening to know security is a top priority. I fancy that getting customers amused by as many Alexa skills as possible, so that Amazon can collect as much data as possible, might be a higher priority.
    Still, the spokesperson added: “We appreciate the work of independent researchers who help bring potential issues to our attention.”
    Some might translate this as: “Darn it, they’re right. But how do you expect us to monitor all these little skills? We’re too busy thinking big.”
    Hey, Alexa. Does Anyone Really Care?
    Of course, Amazon believes its monitoring systems work well in identifying true miscreants. Somehow, though, expecting developers to stick to the rules isn’t quite the same as making sure they do.
    I also understand that the company believes kid skills often don’t come attached to a privacy policy because they don’t collect personal information.
    To which one or two parents might mutter: “Uh-huh?”
    Ultimately, like so many tech companies, Amazon would prefer you to monitor — and change — your own permissions, as that would be very cost-effective for Amazon. But who really has those monitoring skills?
    This research, presented last Thursday at the Network and Distributed System Security Symposium, makes for such candidly brutal reading that at least one or two Alexa users might consider what they’ve been doing. And with whom.
    Then again, does the majority really care? Until some unpleasant happenstance occurs, most users just want to have an easy life, amusing themselves by talking to a machine when they could quite easily turn off the lights themselves.
    After all, this isn’t even the first time that researchers have exposed the vulnerabilities of Alexa skills. Last year, academics tried to upload 234 policy-breaking Alexa skills. Tell me how many got approved, Alexa? Yes, all of them.
    The latest skills researchers themselves contacted Amazon to offer some sort of “Hey, look at this.”
    They say: “Amazon has confirmed some of the problems to the research team and says it is working on countermeasures.”
    I wonder what skills Amazon is using to achieve that.

  •

    Chrome will soon try HTTPS first when you type an incomplete URL

    Google engineers have been some of the most ardent promoters of browser security features over the past few years and, together with the teams behind the Firefox and Tor browsers, have often been behind many of the changes that have shaped browsers into what they are today.

    From pioneering features like Site Isolation and working behind the scenes at the CA/B Forum to improve the state of the TLS certificate business, we all owe a great deal of gratitude to the Chrome team.
    But one of the biggest areas of interest for Chrome engineers over the past few years has been pushing and promoting the use of HTTPS, both inside their own browser and among website owners.
    As part of these efforts, Chrome now tries to upgrade sites from HTTP to HTTPS when HTTPS is available.
    Chrome also warns users when they’re about to enter passwords or payment card data on unsecured HTTP pages, from where they might be sent across a network in plaintext.
    And Chrome also blocks downloads from HTTP sources if the page URL is HTTPS, to avoid users being tricked into thinking their download is secure when it actually is not.
    Changes to the Chrome Omnibox arriving in v90
    But even though around 82% of all internet sites run on HTTPS, these efforts are far from done. The latest of these HTTPS-first changes will arrive in Chrome 90, scheduled for release in mid-April this year.

    The change will impact the Chrome Omnibox — the name Google uses to describe the Chrome address (URL) bar.
    In current versions, when users type a link in the Omnibox, Chrome will load the typed link, regardless of protocol. But if users forget to type the protocol, Chrome will add “http://” in front of the text and attempt to load the domain via HTTP.
    For example, typing something like “domain.com” in current Chrome installs loads “http://domain.com.”
    This will change in Chrome 90, according to Chrome security engineer Emily Stark. Starting with v90, when the protocol is omitted, the Omnibox will load domains via HTTPS, adding an “https://” prefix instead.
    “Currently, the plan is to run as an experiment for a small percentage of users in Chrome 89, and launch fully in Chrome 90, if all goes according to plan,” Stark explained on Twitter this week.
    Users who’d like to test the new mechanism can do so already in Chrome Canary. They can visit the following Chrome flag and enable the feature:
    chrome://flags/#omnibox-default-typed-navigations-to-https
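    The scheme-defaulting behaviour described above amounts to a simple normalization step. A hypothetical Python sketch (not Chrome’s actual implementation, and ignoring corner cases such as `localhost:8080`, which `urlparse` reads as having a scheme):

```python
from urllib.parse import urlparse

def normalize_typed_url(typed: str, default_scheme: str = "https") -> str:
    """Keep an explicit scheme if the user typed one; otherwise prepend
    the default, mirroring the Omnibox change described above."""
    if urlparse(typed).scheme:
        return typed  # e.g. "http://domain.com" loads exactly as typed
    return f"{default_scheme}://{typed}"
```

    With `default_scheme="http"` this reproduces the current behaviour; with the default of `"https"` it reproduces the planned Chrome 90 behaviour.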
