More stories

  • TikTok removed 89M videos, most of which came from the US

    TikTok has released its latest transparency report, revealing that more than 89.13 million videos were removed from its platform in the second half of 2020. The largest share of these, at 11.78 million, came from the United States, and 83.3% were removed before they clocked any views.
    The videos, which accounted for under 1% of all videos uploaded on TikTok, were removed for violating various conditions detailed in the Chinese tech company’s community guidelines or terms of service. These spanned categories such as minor safety, violent and graphic content, illegal activities and regulated goods, and suicide and dangerous acts, according to the company’s fourth and latest transparency report.
    Some 92.4% of videos were removed before users reported them and 93.5% within 24 hours of being posted. More than 6.14 million accounts were shuttered, while almost 9.5 million spam accounts were removed along with 5.23 million spam videos posted by these accounts. Some 173.25 million accounts were stopped from being created through automated means. 
    In addition, more than 3.5 million ads also were rejected for violating the company’s advertising policies and guidelines, said TikTok, which noted that it did not accept paid political ads. 
    Apart from the US, some 8.22 million of the removed videos originated in Pakistan, while 7.51 million were from Brazil and 4.75 million from Russia. Indonesia rounded out the top five countries, accounting for 3.86 million of the videos removed worldwide. 
    Amongst government agencies that submitted requests to restrict or remove content on the video platform, Russia led the pack with 135 such requests, followed by Pakistan at 97, and Australia at 32. 
    Owned by ByteDance, TikTok also operated a COVID-19 information hub, which it said clocked some 2.63 billion views in the second half of last year. Public service announcements directing users to the World Health Organisation and local public health resources were viewed more than 38.01 billion times. 

    TikTok added that it removed 51,505 videos for promoting COVID-19 misinformation, 86% of which were taken down before users reported them and 87% within 24 hours of being uploaded on the platform. Some 71% did not clock any views before they were removed. 
    In the first half of 2020, the video platform removed more than 104.54 million videos, with India and the US contributing the most of such content at 37.68 million and 9.82 million, respectively. 

  • Minion privilege escalation exploit patched in SaltStack Salt project

    The Salt Project has patched a privilege escalation bug impacting SaltStack Salt minions that could be used during a wider exploit chain. 

    The vulnerability, CVE-2020-28243, is described as a privilege escalation bug impacting SaltStack Salt minions allowing “an unprivileged user to create files in any non-blacklisted directory via a command injection in a process name.” 
    The bug has been given a severity rating of 7.0 and impacts Salt versions before 3002.5.
    SaltStack’s Salt is an open source project and software designed for automation and infrastructure management. 
    In November, Immersive Labs’ security researcher Matthew Rollings performed a scan on the tool using Bandit, a Python application security scanner, and came across the bug as a result. 
    Salt includes a master system and minions; the latter carry out commands issued by the master, and both often run as root. Rollings discovered a command injection vulnerability in minions when the master system summons a process called restartcheck. Exploits can be triggered if attackers use crafted process names, permitting local users to escalate their privileges to root, as long as they are able to create files on a minion in a non-blacklisted directory. 
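    To make the bug class concrete, here is a deliberately simplified Python sketch of a command injection through a process name. The process name, commands, and structure below are hypothetical and are not taken from Salt's restartcheck module; the point is only that splicing an attacker-controlled name into a shell string lets extra commands ride along, while passing it as a plain argv element does not.

        # Illustrative sketch only -- not Salt's actual restartcheck code.
        # It shows the bug class described above: an attacker-chosen process
        # name ends up inside a shell command line.
        import subprocess

        # Hypothetical process name chosen by a local attacker; the ';' smuggles
        # in a second command ('id') that the shell will happily execute.
        proc_name = "nginx; id"

        # Vulnerable pattern: the name is interpolated into a shell string.
        subprocess.run(f"echo inspecting {proc_name}", shell=True)

        # Safer pattern: an argv list with no shell, so the name stays an
        # opaque string and the ';' has no special meaning.
        subprocess.run(["echo", "inspecting", proc_name])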
    With further investigation, the researcher noted it may also be possible to perform container escapes, including performing the exploit “within a container to gain command execution as root on the host machine.”

    In addition, Rollings said the vulnerability “may be performed by an attacker without local shell access, [and] under certain circumstances, remote users can influence process names.” However, this form of attack is considered “unlikely” and could be difficult to trigger. 
    The Salt Project resolved the vulnerability in a February security release. The group also patched other high-impact bugs including CVE-2021-3197, a shell injection flaw in Salt-API’s SSH client; CVE-2021-25281, an eAuth security issue that could allow remote attackers to run any wheel modules on the master, and CVE-2021-25283, a failure to protect against server-side template injection attacks. 
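    For administrators who want a quick, unofficial way to see whether a host is running a Salt build that predates these fixes, a minimal sketch along the following lines may help. It assumes Salt and the packaging library are installed on the host being checked, and it is only a first-pass signal: older Salt branches received their own patched point releases, so consult the February 2021 advisories for the full list of fixed versions.

        # Minimal, unofficial check: is the locally installed Salt older than
        # 3002.5, the release cited for CVE-2020-28243? Not a Salt Project tool.
        from packaging import version

        try:
            import salt.version
        except ImportError:
            raise SystemExit("Salt does not appear to be installed on this host")

        installed = version.parse(salt.version.__version__)
        fixed = version.parse("3002.5")

        if installed < fixed:
            print(f"Salt {installed} predates {fixed}: review the February 2021 advisories")
        else:
            print(f"Salt {installed} is at or above the February 2021 release")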
    ZDNet has reached out to the Salt Project and will update when we hear back. 

  • Businessman charged with intent to steal General Electric’s secret silicon technology

    A Chinese businessman has been charged over an alleged plot to steal General Electric’s (GE) semiconductor technology. 

    On Friday, the US Department of Justice (DoJ) said that Chi Lung Winsman Ng, a 64-year-old resident of Hong Kong, allegedly plotted to steal MOSFET intellectual property with the overall goal of developing a business — and rival — based on GE’s technology. 
    According to the DoJ indictment, between roughly March 2017 and January 2018, Ng teamed up with a co-conspirator, a former GE engineer, to hash out a plan to steal the company’s proprietary data. 
    General Electric’s silicon carbide metal-oxide semiconductor field-effect transistors (MOSFETs) are semiconductor designs that the company has been working on for more than a decade. GE’s chips are used in a variety of products and have landed the firm contracts in both the automotive and military space. 
    Assistant Attorney General John Demers of the DoJ’s National Security Division said that Ng and co-conspirators “chose to steal what they lacked the time, talent or money to create.”
    The DoJ claims that the pair went so far as to create PowerPoint presentations to impress investors with their start-up’s business plan and told interested parties that the new company could be profitable within three years. 
    Ng and the engineer allegedly claimed that the start-up owned assets worth $100 million — including intellectual property — and sought to secure $30 million in funding. At least one meeting took place with a Chinese investor, according to US prosecutors. 

    “We have no evidence that there was an illegal MOSFET technology transfer to any Chinese companies, including the company that Ng and his co-conspirator were trying to start,” the DoJ added. 
    Ng has been charged with conspiracy to steal trade secrets. If arrested and found guilty, the businessman would face up to 10 years behind bars and a fine of up to $250,000. 
    “According to the indictment, Mr. Ng conspired to steal valuable and sensitive technology from GE and produce it in China,” commented Special Agent in Charge Thomas Relford of the FBI’s Albany Field Office. “Our office, the US Attorney’s Office, and GE coordinated closely and worked quickly to prevent that theft and the resulting damage to our economic security.”

  • Why would you ever trust Amazon's Alexa after this?

    Skillful, but not necessarily trustworthy? (Image: Amazon)
    It was only the other day that I was wondering whether it would be fun to have a cuckoo clock in my kitchen.

    An Amazon Alexa-powered cuckoo clock, that is.
    I concluded that the idea was utterly bonkers, as are most things Alexa-enabled.
    But we all have our prejudices and many Americans are only too delighted to have Amazon’s Echos and Dots strewn about their homes to make their lives easier.
    Why, Alexa can even buy you your mummy, should you want.
    Yet perhaps Alexa-lovers should be warned that things may not be as delightful as they seem.
    Skills? Oh, Everyone’s Got Skills.
    New research from concerned academics at Germany’s Ruhr-University Bochum, together with equally concerned colleagues from North Carolina State — and even a researcher who, during the project, joined Google — may just make Alexa owners wonder about the true meaning of an easy life.

    The researchers looked at 90,194 Alexa skills. What they found was a security Emmenthal that would make a mouse wonder whether there was any cheese there at all.
    How much would you like to shudder, oh happy Alexa owner?
    How about this sentence from Dr. Martin Degeling: “A first problem is that Amazon has partially activated skills automatically since 2017. Previously, users had to agree to the use of each skill. Now they hardly have an overview of where the answer Alexa gives them comes from and who programmed it in the first place.”
    So the first problem is that you have no idea where your clever answer comes from whenever you rouse Alexa from her slumber. Or, indeed, how secure your question may have been.
    Ready for another quote from the researchers? Here you go: “When a skill is published in the skill store, it also displays the developer’s name. We found that developers can register themselves with any company name when creating their developer’s account with Amazon. This makes it easy for an attacker to impersonate any well-known manufacturer or service provider.”
    Please, this is the sort of thing that makes us laugh when big companies get hacked — and don’t tell us for months, or even years.
    These researchers actually tested the process for themselves. “In an experiment, we were able to publish skills in the name of a large company. Valuable information from users can be tapped here,” they said, modestly.
    This finding was bracing, too. Yes, Amazon has a certification process for these skills. But “no restriction is imposed on changing the backend code, which can change anytime after the certification process.”
    In essence, then, a malicious developer could change the code and begin to hoover up sensitive personal data.

    Security? Yeah, It’s A Priority.
    Then, say the researchers, there are the skills developers who publish under a false identity.
    Perhaps, though, this all sounds too dramatic. Surely all these skills have privacy policies that govern what they can and can’t do.
    Please sit down. From the research: “Only 24.2% of skills have a privacy policy.” So three-quarters of the skills, well, don’t.
    Don’t worry, though, there’s worse: “For certain categories like ‘kids’ and ‘health and fitness’ only 13.6% and 42.2% skills have a privacy policy, respectively. As privacy advocates, we feel both ‘kids’ and ‘health’ related skills should be held to higher standards with respect to data privacy.”
    Naturally, I asked Amazon what it thought of these slightly chilly findings.
    An Amazon spokesperson told me: “The security of our devices and services is a top priority. We conduct security reviews as part of skill certification and have systems in place to continually monitor live skills for potentially malicious behavior. Any offending skills we identify are blocked during certification or quickly deactivated. We are constantly improving these mechanisms to further protect our customers.”
    It’s heartening to know security is a top priority. I fancy that getting customers amused by as many Alexa skills as possible, so that Amazon can collect as much data as possible, might be a higher priority.
    Still, the spokesperson added: “We appreciate the work of independent researchers who help bring potential issues to our attention.”
    Some might translate this as: “Darn it, they’re right. But how do you expect us to monitor all these little skills? We’re too busy thinking big.”
    Hey, Alexa. Does Anyone Really Care?
    Of course, Amazon believes its monitoring systems work well in identifying true miscreants. Somehow, though, expecting developers to stick to the rules isn’t quite the same as making sure they do.
    I also understand that the company believes kid skills often don’t come attached to a privacy policy because they don’t collect personal information.
    To which one or two parents might mutter: “Uh-huh?”
    Ultimately, like so many tech companies, Amazon would prefer you to monitor — and change — your own permissions, as that would be very cost-effective for Amazon. But who really has those monitoring skills?
    This research, presented last Thursday at the Network and Distributed System Security Symposium, makes for such candidly brutal reading that at least one or two Alexa users might consider what they’ve been doing. And with whom.
    Then again, does the majority really care? Until some unpleasant happenstance occurs, most users just want to have an easy life, amusing themselves by talking to a machine when they could quite easily turn off the lights themselves.
    After all, this isn’t even the first time that researchers have exposed the vulnerabilities of Alexa skills. Last year, academics tried to upload 234 policy-breaking Alexa skills. Tell me how many got approved, Alexa? Yes, all of them.
    The researchers behind this latest skills study contacted Amazon themselves to offer some sort of “Hey, look at this.”
    They say: “Amazon has confirmed some of the problems to the research team and says it is working on countermeasures.”
    I wonder what skills Amazon is using to achieve that.

  • Chrome will soon try HTTPS first when you type an incomplete URL

    Google engineers have been some of the most ardent promoters of browser security features over the past few years and, together with the teams behind the Firefox and Tor browsers, have often been behind many of the changes that have shaped browsers into what they are today.

    From pioneering features like Site Isolation and working behind the scenes at the CA/B Forum to improve the state of the TLS certificate business, we all owe a great deal of gratitude to the Chrome team.
    But one of the biggest areas of interest for Chrome engineers over the past few years has been pushing and promoting the use of HTTPS, both inside their browser and among website owners.
    As part of these efforts, Chrome now tries to upgrade sites from HTTP to HTTPS when HTTPS is available.
    Chrome also warns users when they’re about to enter passwords or payment card data on unsecured HTTP pages, from where they might be sent across a network in plaintext.
    And Chrome also blocks downloads from HTTP sources if the page URL is HTTPS, to avoid users being tricked into thinking their download is secure when it actually is not.
    Changes to the Chrome Omnibox arriving in v90
    But even though around 82% of all internet sites run on HTTPS, these efforts are far from done. The latest of these HTTPS-first changes will arrive in Chrome 90, scheduled to be released in mid-April this year.

    The change will impact the Chrome Omnibox, the name Google uses to describe the Chrome address (URL) bar.
    In current versions, when users type a link in the Omnibox, Chrome will load the typed link, regardless of protocol. But if users forget to type the protocol, Chrome will add “http://” in front of the text and attempt to load the domain via HTTP.
    For example, typing something like “domain.com” in current Chrome installs loads “http://domain.com.”
    This will change in Chrome 90, according to Chrome security engineer Emily Stark. Starting with v90, the Omnibox will load domains typed without a protocol via HTTPS, adding an “https://” prefix instead.
    “Currently, the plan is to run as an experiment for a small percentage of users in Chrome 89, and launch fully in Chrome 90, if all goes according to plan,” Stark explained on Twitter this week.
    Users who’d like to test the new mechanism can do so already in Chrome Canary. They can visit the following Chrome flag and enable the feature:
    chrome://flags/#omnibox-default-typed-navigations-to-https
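    Conceptually, the new default behaves like the short Python sketch below. It is an illustration rather than Chrome's implementation (Chrome's real logic also deals with timeouts, redirects, enterprise policy, and local hostnames); it assumes the requests library, and example.com is just a placeholder domain.

        # Conceptual sketch of "HTTPS-first" handling of a scheme-less typed address.
        import requests

        def resolve_typed_url(typed: str, timeout: float = 3.0) -> str:
            """Return the URL a browser following this policy would navigate to."""
            if typed.startswith(("http://", "https://")):
                return typed  # an explicit scheme is always respected
            https_url = f"https://{typed}"
            try:
                # Probe the HTTPS endpoint; any response keeps the upgrade.
                requests.head(https_url, timeout=timeout, allow_redirects=True)
                return https_url
            except requests.RequestException:
                return f"http://{typed}"  # otherwise fall back to plain HTTP

        print(resolve_typed_url("example.com"))  # placeholder domain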


  • Berlin resident jailed for threatening to bomb NHS hospital unless Bitcoin ransom was paid

    A Berlin resident has been found guilty of threatening to bomb a hospital and attempting to blackmail the UK’s National Health Service (NHS) for £10 million in Bitcoin (BTC).

    Emil Apreda, previously only identified as Emil A. due to German law, is a 33-year-old Italian and resident of Berlin, Germany, with a background in computing. 
    On Friday, the presiding judge in the District Criminal Court of Berlin convicted Apreda and sentenced him to three years in prison.
    Apreda was accused of sending emails to the NHS between April and June 2020, in which he threatened to detonate a bomb in an unspecified hospital in the United Kingdom unless he was paid £10 million ($14m) in cryptocurrency. 
    Nigel Leary, Deputy Director of the National Cyber Crime Unit at the UK’s National Crime Agency (NCA), said in a briefing on Thursday that Apreda’s threats “escalated” over a period of six weeks. 
    The first email was sent on April 25, during the first UK lockdown. The NHS was the first subject of the threats, with Apreda saying he would deposit an “explosive package” in a hospital unless his demands were met. The NCA was also sent the same email within hours. 
    Apreda monitored world events and attempted to take advantage of them: he not only sought to exploit the COVID-19 pandemic but also claimed he would plant explosives at Black Lives Matter protests. In addition, the agency says that Apreda threatened the safety of members of parliament around the time of the anniversary of the murder of Labour MP Jo Cox. 

    Taken together, the NCA believes, the threats were a “social engineering” attempt designed to “elicit the response he was after”: the cryptocurrency payment. The agency has no reason to suspect that Apreda had any access to explosive materials. 
    The NHS did not respond to the blackmail attempts. 
    Prosecutors said that the “attempted extortion” continued until his arrest in June, when UK law enforcement worked with overseas partners to obtain a warrant and force entry into the suspect’s home. 
    Apreda’s trial began on December 11 in Germany. He has now been sentenced but has been released on bail until the decision has been ratified. 
    The NCA took the threat seriously, with Leary noting that at a time when the COVID-19 pandemic was entering full swing, there was a “deep and heightened vulnerability” in the medical system.
    The investigation into the culprit required a “dynamic and significant response,” according to the agency. The potential risk was heightened because Apreda claimed he was part of “Combat 18,” which, while not proscribed as a terrorist organization in the UK, is a group with extremist, far-right leanings. 
    Hospitals, by their nature, are open areas and during the first lockdown were one of the few areas in which there were mass gatherings of people. 
    “We had to step in pretty quickly and make sure that everything that could be done, was done,” Leary commented, but added that “nothing should be done to deter people from seeking medical treatment.”
    Apreda was not extradited but would have faced “similar” charges in the UK, according to the NCA.
    In June, YouTuber Matthew Wain was jailed for 12 weeks after he recorded himself making a bomb threat and saying that he hoped NHS staff at Birmingham City Hospital “died of coronavirus.”
    The footage was posted online in March. The 31-year-old later claimed he was dissatisfied with the treatment he had received at the hospital and that the online rant was nothing more than an “empty threat.” 

  • Go malware is now common, having been adopted by both APTs and e-crime groups

    The number of malware strains coded in the Go programming language has increased sharply, by around 2,000%, since 2017, cybersecurity firm Intezer said in a report published this week.
    The company’s findings highlight and confirm a general trend in the malware ecosystem, where malware authors have slowly moved away from C and C++ to Go, a programming language that Google began developing in 2007.
    Intezer: Go malware, now a daily occurrence
    While the first Go-based malware was detected in 2012, it took a few years for Golang to catch on with the malware scene.
    “Before 2019, spotting malware written in Go was more a rare occurrence and during 2019 it became a daily occurrence,” Intezer said in its report.
    But today, Golang (as Go is often also called) has broken through and has been widely adopted.

    It is used by nation-state hacking groups (also known as APTs), cybercrime operators, and security teams alike, the last of which often use it to create penetration-testing toolkits.
    There are three main reasons why Golang has seen this sudden, sharp rise in popularity. The first is that Go makes cross-platform compilation easy: malware developers can write code once and compile binaries for multiple platforms from a single codebase, letting them target Windows, Mac, and Linux with a versatility they don’t usually have with many other programming languages (a short cross-compilation sketch follows after the three reasons).

    The second reason is that Go-based binaries are still hard for security researchers to analyze and reverse engineer, which has kept detection rates for Go-based malware very low.
    The third reason is related to Go’s support for working with network packets and requests. Intezer explains:
    “Go has a very well-written networking stack that is easy to work with. Go has become one of the programming languages for the cloud with many cloud-native applications written in it. For example, Docker, Kubernetes, InfluxDB, Traefik, Terraform, CockroachDB, Prometheus and Consul are all written in Go. This makes sense given that one of the reasons behind the creation of Go was to invent a better language that could be used to replace the internal C++ network services used by Google.”
    Since malware strains usually tamper with, assemble, or send and receive network packets all the time, Go provides malware devs with all the tools they need in one place, and it’s easy to see why many malware coders are abandoning C and C++ for it. These three reasons are why we saw more Golang malware in 2020 than ever before.
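    As a concrete illustration of the first reason above, cross-compiling one codebase for several platforms only requires switching the GOOS and GOARCH environment variables before invoking the standard Go toolchain. The sketch below drives that workflow from a short Python script; it assumes the go command is on the PATH, that the current directory contains a buildable Go module, and the output names are arbitrary.

        # Hedged sketch: build the same Go codebase for three platforms by
        # switching GOOS/GOARCH before calling `go build`.
        import os
        import subprocess

        targets = [
            ("windows", "amd64", "tool-windows.exe"),
            ("linux",   "amd64", "tool-linux"),
            ("darwin",  "arm64", "tool-macos"),
        ]

        for goos, goarch, output in targets:
            env = {**os.environ, "GOOS": goos, "GOARCH": goarch}
            subprocess.run(["go", "build", "-o", output, "."], env=env, check=True)
            print(f"built {output} for {goos}/{goarch}")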
    “Many of these malware [families] are botnets targeting Linux and IoT devices to either install crypto miners or enroll the infected machine into DDoS botnets. Also, ransomware has been written in Go and appears to become more common,” Intezer said.
    Examples of some of the biggest and most prevalent Go-based threats seen in 2020 include, by category:
    Nation-state APT malware:
    Zebrocy – Russian state-sponsored group APT28 created a Go-based version of their Zebrocy malware last year.
    WellMess – Russian state-sponsored group APT29 deployed new upgraded versions of their Go-based WellMess malware last year.
    Godlike12 – A Chinese state-sponsored group deployed Go-based backdoors for attacks on the Tibetan community last year.
    Go Loader – The China-linked Mustang Panda APT deployed a new Go-based loader last year for their attacks.
    E-crime malware:
    GOSH – The infamous Carbanak group deployed a new RAT named GOSH written in Go last August.
    Glupteba – New versions of the Glupteba loader were seen in 2020, more advanced than ever.
    A new RAT targeting Linux servers running Oracle WebLogic was seen by Bitdefender.
    CryptoStealer.Go – New and improved versions of the CryptoStealer.Go malware were seen in 2020. This malware targets cryptocurrency wallets and browser passwords.
    Also, during 2020, a clipboard stealer written in Go was found.
    New ransomware strains written in Go:
    Naturally, in light of its recent findings, Intezer, along with others, expects Golang usage to continue to rise in the coming years and to join C, C++, and Python as a preferred programming language for coding malware.

  • Oxford University lab with COVID-19 research links targeted by hackers

    An Oxford University lab conducting research into the coronavirus pandemic has been compromised by cyberattackers. 

    Oxford University, one of the most prominent educational institutions in the UK, was made aware of the security breach on Thursday. 
    The university confirmed that a security incident took place at the Division of Structural Biology lab, also known as “Strubi,” after Forbes disclosed that hackers were boasting of access to the school’s systems. 
    Strubi’s labs are used by students studying molecular and biological science, and during the COVID-19 pandemic, the Oxford team has been researching the virus itself and examining vaccine candidates. 
    The school’s latest publications include work on RNA strands and viruses, as well as antiviral agents. However, the group has not been directly involved in the development of the Oxford University-AstraZeneca vaccine. 
    According to Forbes and Hold Security, the lab’s “biochemical preparation machines” were compromised by unknown attackers, who boasted of their break-in to what appears to be lab equipment, including pumps and pressure tools, in an apparent attempt to sell access to their victim’s systems.
    Timestamps of February 13 and 14, 2021, were noted in evidence provided to the publication. 

    Oxford University has confirmed the security breach. However, in a statement, the university said there “has been no impact on any clinical research, as this is not conducted in the affected area.”
    In addition, the cyberattackers do not appear to have compromised any system relating to patient data or records. 
    “We are aware of an incident affecting Oxford University and are working to fully understand its impact,” an Oxford University spokesperson told Forbes. 
    The UK’s GCHQ has been informed and the National Cyber Security Centre (NCSC) will investigate the incident. 
    This is not the first time a university may have been targeted with coronavirus or vaccine research in mind. In May 2020, the NCSC warned that threat actors from Russia, Iran, and China were targeting British universities and research hubs to steal research. 
    The European Medicines Agency (EMA), unfortunately, was successfully attacked in December and the cyberattackers responsible then leaked stolen data relating to COVID-19 vaccines and medicines in January this year. 
    In late 2020, Interpol warned of a wave of COVID-19 and flu vaccine-related cybercrimes. The law enforcement agency said that the worldwide pandemic had “triggered unprecedented opportunistic and predatory criminal behavior.”