More stories

  • The real reason Apple is warning users about MacBook camera covers

    Earlier this month, Apple published a support document warning MacBook owners against closing their laptops with a camera cover fitted. And just as with the debate over wearing masks in public, some people don’t like being told what to do, even if it is for their own good.

    First off, some clarity.
    Apple didn’t say, “don’t use a camera cover.” Apple clearly said: “Don’t close your MacBook, MacBook Air, or MacBook Pro with a cover over the camera.”
    Apple even went on to clarify the issue:

    “If you close your Mac notebook with a camera cover installed, you might damage your display because the clearance between the display and keyboard is designed to very tight tolerances. Covering the built-in camera might also interfere with the ambient light sensor and prevent features like automatic brightness and True Tone from working.”

    I spoke to an Apple repair technician, who, on condition of anonymity, gave ZDNet a rundown of the problem.

    “What we’ve been told is that since people have started to work and study from home more, the use of camera covers has gone up dramatically,” the repair tech told me. “It makes sense, people are using video more and more, and it can feel intrusive, so being able to slide a cover across the camera offers some privacy even mid-meetings where people might not want to disconnect. But consequently, the number of screen breakages are up. And it’s a pretty distinctive screen break — leaving a glowing white line down the middle of the display — so we know why it’s happened even if people are evasive about how the damage happened.”
    Another reason is tighter tolerances.
    “That new 16-inch MacBook Pro has the thinnest bezel I’ve seen,” said the veteran tech, who has been doing the job for years and has handled pretty much everything Apple has made in that time. “It’s almost non-existent, and anything that gets in-between the screen and the body can break the display in a heartbeat. They do, they just go ‘pop’ and the damage is done.”
    “Only the other day, I saw a display that had been cracked when someone has closed a coin on it. Left a nice print of the coin on the display. The owner told me ‘it just happened on its own’… hmmm, OK.”

    And it’s not just camera covers that are the problem.
    “We see MacBooks come in with all sorts of junk stuck over the camera,” the technician told me, “from gum to stickers. In the very early days of the pandemic, I had a chap come in with a MacBook where he’d enthusiastically glued a bit of plastic over the camera and now wanted it to work because his boss was going to be using Zoom more. It wasn’t going to come off, and it was cheaper for him to buy a webcam than have the display replaced.”
    “Oh, and there was this other one,” they reminisced. “It had a Band-Aid on over the camera. And it looked used. Hmmm.”
    Apple’s support document includes information for those who need to use camera covers: the cover must be no thicker than an average piece of printer paper (0.1mm), something like a Post-it note, and it shouldn’t leave adhesive residue. Owners who use a camera cover thicker than 0.1mm are warned to remove it before closing the laptop.
    There is, however, some good news in all this if you have busted your display.
    “Yes, AppleCare+ covers this damage,” the technician confirmed. “Without that, a display replacement is a seriously expensive repair job, especially on that new 16-inch MacBook Pro.”
    Instead of using camera covers, Apple recommends keeping an eye on that camera indicator light. If it’s glowing green, the camera is on. If not, it’s off. And Apple claims the camera has been engineered such that it cannot be activated without the camera indicator light coming on.

  • Google: Mitigating disinformation and foreign influence through social media a joint effort

    Google Australia believes long-term success in mitigating disinformation and foreign influence through social media rests on developing a culture of online safety across society, including through ongoing “collaboration” between the likes of industry, the technical community, and government.
    According to Google, such work must be paired with efforts to educate users and organisations, from school students through to senior citizens and company employees, on how to secure their online presence and to “apply critical thinking to the information they see and consume”.
    The remarks were made in the company’s submission [PDF] to the Select Committee on Foreign Interference through Social Media, which also contained an overview of the work its parent company has done to counter coordinated influence operations and other government-backed attacks.
    In its submission to the committee looking into the risk posed by foreign interference through social media, the local arm of the search giant said it takes its responsibility “very seriously”.
    “How companies like Google address these concerns has an impact on society and on the trust users place in our services,” it wrote.

    “We believe that meeting it begins with providing transparency into our policies, inviting feedback, enabling users to understand and control their online engagement, and collaborating with policymakers, civil society, and academics around the world in the development of sensible, effective policies, and processes.”
    In its submission, Google said algorithms cannot determine whether a piece of content on current events is true or false, nor can they assess the intent of its creator just by reading what’s on a page. It said, however, there are clear cases of intent to manipulate or deceive users.
    “For instance, a news website that alleges it contains ‘Reporting from Canberra, Australia’ but whose account activity indicates that it is operated out of Eastern Europe is likely not being transparent with users about its operations or what they can trust it to know firsthand,” Google wrote.
    It said the policies across Google Search, Google News, YouTube, and its advertising products outline behaviours that are prohibited to address such situations.
    Google said its Threat Analysis Group (TAG) reported disabling influence campaigns originating from groups in Iran, Egypt, India, Serbia, and Indonesia in the first quarter of 2020. It also removed more than a thousand YouTube channels that were apparently part of a large campaign and that were “behaving in a coordinated manner”.
    “On any given day, Google’s Threat Analysis Group is tracking more than 270 targeted or government-backed attacker groups from more than 50 countries,” it wrote.
    Since the beginning of 2020, Google said it had seen a rising number of attackers, including those from Iran and North Korea, impersonating news outlets or journalists. In April this year, Google sent 1,755 warnings to users whose accounts were targets of government-backed attackers.
    “We intentionally send warnings in timed batches to all users who may be at risk, rather than at the moment we detect the threat itself, so that attackers cannot track some of our defence strategies,” the submission said. “We also notify law enforcement about what we’re seeing, as they have additional tools to investigate these attacks.”
    The search giant also said it detected 18 million malware and phishing Gmail messages per day related to COVID-19, in addition to more than 240 million COVID-related daily spam messages.
    “Our machine learning models have evolved to understand and filter these threats, and we continue to block more than 99.9% of spam, phishing, and malware from reaching our users.
    “Google’s TAG has specifically identified over a dozen government-backed attacker groups using COVID-19 themes as lure for phishing and malware attempts — trying to get their targets to click malicious links and download files, including in Australia,” it added.
    “We have an important responsibility to our users and to the societies in which we operate to curb the efforts of those who aim to propagate false information on our platforms.”

  • Hacker breaches security firm in act of revenge


    A hacker claims to have breached the backend servers belonging to a US cyber-security firm and stolen information from the company’s “data leak detection” service.
    The hacker says the stolen data includes more than 8,200 databases containing the information of billions of users that leaked from other companies during past security breaches.
    The databases have been collected inside DataViper, a data leak monitoring service managed by Vinny Troia, the security researcher behind Night Lion Security, a US-based cyber-security firm.
    A data leak monitoring service is a common type of service offered by cyber-security firms. Security companies scan the dark web, hacking forums, paste sites, and other locations to collect information about companies that had their data leaked online.
    They compile these “hacked databases” inside private backends so customers can search the data and monitor when employee credentials leak online, or when the companies themselves suffer a security breach.
    The DataViper hack

    Earlier today, a hacker going by the name of NightLion (the name of Troia’s company) emailed dozens of cyber-security reporters a link to a dark web portal where they had published information about the hack.

    The site contains an e-zine (electronic magazine) detailing the intrusion into DataViper’s backend servers. The hacker claims to have spent three months inside DataViper servers while exfiltrating databases that Troia had indexed for the DataViper data leak monitoring service.
    The hacker also posted the full list of 8,225 databases that Troia had indexed inside the DataViper service, a list of 482 downloadable JSON files containing samples of the data they claim to have stolen from the DataViper servers, and proof that they had access to DataViper’s backend.
    Furthermore, the hacker also posted ads on the Empire dark web marketplace where they put up for sale 50 of the biggest databases that they found inside DataViper’s backend.

    Most of the 8,200+ databases listed by the hacker were “old breaches” originating from intrusions that took place years earlier and had already been leaked online in several locations.
    However, there were also some new databases that ZDNet was not able to link to publicly disclosed security breaches. ZDNet will not be detailing these companies and their breaches, as we have requested additional details from the hacker, and are still in the process of verifying their claims.
    Troia: Hacker breached a test server
    In a phone call today with ZDNet, Troia admitted that the hacker gained access to one of the DataViper servers; however, the Night Lion Security founder said the server was merely a test instance.
    Troia told ZDNet that he believes the hacker is actually selling their own databases, rather than any information they stole from his server.
    The security researcher said this data had been public for many years or, in some cases, that he had obtained it from the same hacker communities of which the leaker is also a part.
    Troia told ZDNet that he believes the leaker is associated with several hacking groups such as TheDarkOverlord, ShinyHunters, and GnosticPlayers.
    All three groups have prolific hacking histories and are responsible for hundreds of breaches, some of which Troia indexed in his DataViper database.
    Furthermore, Troia also documented the activities of some of these groups in a book he published this spring. The DataViper founder says today’s leak was timed to damage his reputation before a talk he’s scheduled to give on Wednesday at the SecureWorld security conference about some of the very same hackers, and their supposed real-world identities.
    Troia’s full statement is below:

    “When people think they are above the law, they get sloppy. So much so they forget to look at their own historical mistakes. I literally detailed an entire scenario in my book where I allowed them to gain access to my web server in order to get their IPs. They haven’t learned. All they had access to was a dev environment. Much like the grey Microsoft hack which they recently took credit for, all they had was some source code that turned out to be nothing special, but they hyped it anyway hoping to get people’s attention. These are the actions of scared little boys pushed up against a wall facing the loss of their freedom.”

    Additional reporting will follow throughout the week as ZDNet goes through the leaked data.

  • AustCyber says digital trust required to boost Aussie economy

    Fresh from Prime Minister Scott Morrison telling Australians the country was under attack from an unnamed state actor and pledging AU$1.35 billion to boost local cybersecurity capabilities, AustCyber has touted the importance of digital trust in securing the nation’s economy.
    AustCyber is a non-profit organisation charged with growing a local cybersecurity ecosystem and facilitating its global expansion.
    “Australia’s digital infrastructure and the data it carries are core to the value and growth of the nation’s economy,” AustCyber CEO Michelle Price said in Australia’s Digital Trust Report 2020.
    “The growing economic dependency on the digital domain has an intrinsic relationship with the trust users and consumers have in it and therefore the security, privacy, and resilience of the infrastructure and data.”
    According to Price, a globally competitive Australian cybersecurity sector will ultimately underpin the future success of every industry in the national economy.

    The report [PDF], released on Monday, explains digital trust as the level of confidence users have in the ability of technology to “enable a high functioning cyber-physical world”. AustCyber said it is earned by providing secure, private, safe, and reliable access to technology, as well as the ways in which technology has been designed, constructed, and delivered.
    It said cybersecurity is a foundational pillar of digital trust in the economy.
    AustCyber said digital activity currently contributes AU$426 billion to the Australian economy, and generates AU$1 trillion in gross economic output.
    It also said a four-week digital interruption to Australia’s economy, such as a widespread cyber attack, would cost up to AU$30 billion, or 1.5% of Australia’s GDP, and over 163,000 jobs.
    Offering suggestions on how to boost local cybersecurity capability, AustCyber said more is needed from governments and industry, asking that they use sovereign cybersecurity products and services as part of the current focus on digital transformation; increase investment in Australia’s cybersecurity sector; and work together to develop a culture and reputation for high “digital trust” in Australia to attract sustained investment and drive jobs growth.
    The report also modelled the impact of COVID-19 on the Australian economy with a specific focus on overall digital activity, consumer behaviour, and digital infrastructure. It said total positive economic revenue impact is expected to be up to AU$230 billion in the digital activity, online retail, and IT industry sectors, resulting in a GDP gain of up to AU$91 billion per annum for 2021 and 2022. AustCyber added that the job benefit impact is expected to be up to 505,700 per year.
    Later on Monday, Shadow Assistant Minister for Cyber Security Tim Watts offered his take on AustCyber’s report, saying the government is failing to provide the leadership needed to address the threat of cybersecurity incidents.
    “This isn’t just a job for our defence and security agencies. In an interconnected and interdependent economy, we can only confront this growing threat by building resilience throughout the nation, in businesses big and small and across all levels of government,” Watts said.
    “To do this requires leadership — but the first decision Scott Morrison made upon becoming Prime Minister was to abolish the dedicated ministerial role for cybersecurity. It was a disastrous decision that has left cybersecurity policy politically orphaned.”
    Updated Monday 13 July 2020 at 4:00pm AEST: Added comments from Shadow Assistant Minister for Cyber Security Tim Watts.

  • Russian hacker found guilty of Dropbox, LinkedIn, and Formspring breaches


    A jury found Russian hacker Yevgeniy Nikulin guilty of breaching the internal networks of LinkedIn, Dropbox, and Formspring back in 2012 and then selling their user databases on the black market.
    The jury verdict was passed on Friday during what was the first trial to be held in California since the onset of the coronavirus (COVID-19) pandemic.
    The three hacks
    According to court documents and evidence presented at the trial, Nikulin hacked all three companies in the spring of 2012.
    The hacker first breached LinkedIn between March 3 and March 4, 2012, after he infected an employee’s laptop with malware that allowed Nikulin to abuse the employee’s VPN and access LinkedIn’s internal network.
    From here, the hacker stole roughly 117 million user records, data that included usernames, passwords, and emails.

    Nikulin then used the LinkedIn data to send spear-phishing emails to employees at other companies, including people working at Dropbox, where he was able to breach an employee account, and then invite himself to a Dropbox folder holding company data.
    This intrusion lasted from May 14, 2012, to July 25, 2012, and authorities say Nikulin was able to make off with a trove of information on 68 million Dropbox users, including usernames, emails, and hashed passwords.
    Nikulin was also able to phish his way into the employee account of a Formspring engineer, from where, between June 13, 2012, and June 29, 2012, he is believed to have gained access to the company’s internal user database, consisting of 30 million user details.
    Nikulin then sold the data on the underground hacker market to other cyber-criminals. The data surfaced online in 2015 and 2016, as various data traders put it up for sale on publicly accessible forums and criminal e-commerce stores.
    The arrests, extradition, and US trial
    Authorities started an investigation after the three companies filed criminal complaints in California, in 2015. Nikulin was arrested a year later, in October 2016, while vacationing in Prague with his girlfriend.
    A Radio Free Europe editorial published in 2016 highlighted Nikulin’s extravagant lifestyle, financed by his hacking activities, which included several luxury cars, expensive watches, and travel around Europe. In an interview with Russian site AutoRambler, Nikulin admitted to owning a Lamborghini Huracan, a Bentley Continental GT, and a Mercedes-Benz G-Class.
    Despite attempts to fight his extradition in the Czech Republic, the hacker was eventually sent to the US in the summer of 2017, where he was arraigned in front of a judge.
    Nikulin has remained incarcerated since 2017. During that time, he changed lawyers several times, refused to cooperate with the investigation or reach a plea deal, was moved through multiple jails, and was examined by psychologists under court order amid the judge’s concerns for his mental health after he refused to talk with counsel or appear in court. Nikulin was found mentally fit to stand trial.
    The actual trial was initially set for early 2020 but was delayed twice due to the coronavirus pandemic.
    During the trial, which took place under special circumstances and protective measures, Nikulin pleaded not guilty. US prosecutors proved their case, but they also tried to pin other hacks and criminal conspiracies on him.
    The judge supervising the case called the prosecution’s efforts into question just days before the trial ended, describing its evidence as “mumbo jumbo,” wondering if the prosecutors were wasting the jury’s time, and asking out loud whether they had any real evidence against Nikulin besides private messages exchanged between two nicknames in internet chats.
    However, despite the judge critiquing the prosecutors for their handling of the case, the jury found Nikulin guilty after only six hours of deliberations.
    Nikulin’s sentencing was scheduled for September 29, 2020.

  • Researchers create magstripe versions of EMV and contactless cards

    A British security researcher has proven this week that it is still possible in 2020 to create older-generation magnetic stripe (magstripe) cards using details found on modern chip-and-PIN (EMV) and contactless cards, and then use the cloned cards for fraudulent transactions.
    In a whitepaper named “It Only Takes A Minute to Clone a Credit Card, Thanks to a 50-Year-Old Problem,” Leigh-Anne Galloway, Head of Commercial Security Research at Cyber R&D Lab, tested modern card technologies from 11 banks from the US, the UK, and the EU.
    Galloway discovered that four of the 11 banks still issued EMV cards that could be cloned into a weaker magstripe version that could be abused for fraudulent transactions.

    Under normal circumstances, this should not be possible. EMV cards were designed to be hard to clone, primarily due to the secure chip included with each one.
    However, Galloway’s whitepaper offers a step-by-step guide to taking data from an EMV card and creating an older-generation magnetic stripe clone.

    This technique — of cloning a magstripe version from an EMV card — is not new and has been documented as far back as 2007.

    I demonstrated cloning from chip data to magstripe but the banks said that cards issued after 2008 would not be vulnerable and chip data would be “useless to the fraudster”. This new research shows that the problem still has not been fixed, 12 years on https://t.co/6VX8n84hDb
    — Steven Murdoch (@sjmurdoch) July 10, 2020

    Cloning magstripes from EMV data is, in fact, how many carding gangs still operate today.
    Crooks use skimmer or shimmer devices to collect data from EMV cards, create a magstripe clone, and then use that clone to make fraudulent transactions at point-of-sale (POS) systems or to withdraw money from ATMs in countries where EMV cards have not been rolled out and magstripe cards are still accepted.
    Banking industry still slow to adopt proper security practices
    In her whitepaper, Galloway explains why this is still possible.
    “First, the commonalities between magstripe and EMV standards for chip inserted and contactless mean that it’s possible to determine valid cardholder information from one technology and use it for another,” Galloway said.
    “Secondly, magstripe is still a supported payment technology, likely because the adoption of chip-based cards has been slow in some geographic regions around the world.
    “Third, although magstripe is a deprecated technology in many of the countries tested, cloned data is still effective because it is possible to cause the terminal and card to fallback to a magstripe swipe transaction,” the researcher added.
    “Finally, card security codes, a critical point of card verification, are not checked at the time of the transaction by all card issuers.”
    This last point is the more significant issue. As Galloway pointed out in a conversation on Twitter with this reporter, card security codes (CSC, CVV, or CVC values printed on a card) should be unique per technology and should always be validated.

    The card security code (cvv etc) should actually be unique to the method: chip/nfc/mag stripe. The main point is that issuers do not correctly validate transaction data as a result skimmers and fraud are still big business
    — Leigh-Anne Galloway (@L_AGalloway) July 9, 2020

    While banks don’t have full control of what card/payment technologies are supported in other countries, and they’ll still have to support older technologies for legacy purposes, they have the power to verify transactions correctly.
    However, as Steven Murdoch, Research Fellow at University College London, also pointed out on Twitter, the reality is that banks still fail to enforce this simple rule, even now, in 2020.
    Transactions are still approved with the wrong security code, with a code taken from another card technology, and even with no code at all. Failing to verify security codes properly leaves the door open for carding gangs to keep operating by copying and downgrading newer EMV cards into magstripe clones and abusing them overseas, in countries where magstripe cards are still accepted.

    Back in 2007, UK issued cards had an exact copy of the magstripe on the chip. From 2008 cards were supposed to have a different CVV between the magstripe and the chip. However this new security feature is pointless if magstripe transactions with the wrong CVV are accepted!
    — Steven Murdoch (@sjmurdoch) July 10, 2020

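
    The per-technology check Galloway and Murdoch describe can be illustrated with a small sketch. This is a hypothetical issuer-side comparison in Python; the card number, codes, and table are invented for illustration, and real issuers derive CVV1/iCVV cryptographically from card keys rather than storing them in a lookup table.

    ```python
    # Hypothetical issuer-side check: each card technology carries its own
    # security code (CVV1 on the magstripe, iCVV in the chip data, CVV2
    # printed on the card), and a transaction should only be approved when
    # the code presented matches the code for that specific technology.
    # All values below are invented for illustration.

    EXPECTED_CODES = {
        ("4000001234567890", "magstripe"): "123",          # CVV1
        ("4000001234567890", "chip"): "456",               # iCVV
        ("4000001234567890", "card_not_present"): "789",   # CVV2
    }

    def validate_security_code(pan: str, technology: str, presented: str) -> bool:
        """Decline unless the presented code matches this technology's code."""
        expected = EXPECTED_CODES.get((pan, technology))
        if expected is None:
            return False  # unknown card or technology: decline
        return presented == expected

    # A chip-derived iCVV replayed in a magstripe swipe is declined:
    validate_security_code("4000001234567890", "magstripe", "456")  # False
    validate_security_code("4000001234567890", "magstripe", "123")  # True
    ```

    The downgrade fraud described in the whitepaper succeeds precisely when an issuer skips this comparison, or accepts any of the three codes interchangeably.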
    Galloway said that while the whitepaper focused on EMV cards, contactless (NFC-based) cards can be abused in the same way to create magstripe clones for fraudulent transactions.

  • Amazon tells employees to remove TikTok from their phones due to security risk

    Online retail giant Amazon has told employees this week to uninstall the TikTok mobile app from the smartphones they use to access Amazon’s internal email servers.
    According to an email sent to employees today, and seen by ZDNet, workers have until July 10 to remove the TikTok app from their devices.
    The email cited a “security risk” to using the TikTok app, but didn’t go into details. The email’s full text is available below:
    “Due to security risk, the TikTok app is no longer permitted on mobile devices that access Amazon email. If you have TikTok on your device, you must remove it by 10-Jul to retain mobile access to Amazon email. At this time, using TikTok from your Amazon laptop browser is allowed.”
    An Amazon spokesperson did not immediately respond to a request for comment.

    In recent months, privacy and security experts have accused the TikTok app of collecting extensive swaths of user information from the devices on which it is installed, according to reverse engineers who posted their findings on Reddit and to mobile security firm Zimperium.
    Many have accused the Chinese app — without proof — of collecting information from users and passing it to the Chinese government.
    Although never proven, these accusations have created a general panic and wariness around the app, especially when it is used by officials and other high-value individuals.
    As a result of these accusations, since last year, TikTok has been banned by the US military, the Indian government, and the Indian army, just to name a few.

  • Smartwatch tracker for the vulnerable can be hacked to send medication alerts

    Researchers have disclosed a set of serious security issues in a smartwatch tracker used in applications including services designed for the support of the elderly and vulnerable.

    On Thursday, cybersecurity experts from Pen Test Partners disclosed security problems found in the SETracker service, software geared towards children and the elderly — especially those with dementia or individuals who need reminders to complete daily tasks, such as taking their medication. 
    The GPS tracker app can be used in tandem with a smartwatch by carers to find their charges, and in turn, wearers can use the system to make a call if they need help. 
    Chinese developer 3G Electronics’ SETracker app, required to use the watches, is available on iOS and Android and has been downloaded over 10 million times. 

    However, security flaws in the product meant that it was not only carers or loved ones that could keep track of a wearer’s movements or activities. 
    The vendor’s software, of which there are now three mobile app varieties, is often used in the backend of cheap smartwatches on offer from a variety of brands. SETracker is also found in headsets and in the automotive software industry. 
    According to Pen Test Partners, the first major security issue was an unrestricted server-to-server API. The server could be used to hijack the SETracker service in ways including, but not limited to, changing device passwords, making calls, sending text messages, conducting surveillance, and accessing cameras embedded in devices.
    If a device’s backend system was based on SETracker, it was possible to send fake messages, including “TAKEPILLS” commands, which are set up to remind wearers to take their medication. 
    “A dementia sufferer is unlikely to remember that they had already taken their medication,” the researchers noted. “An overdose could easily result.”
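
    The core flaw, as described, is a command API that any caller who can reach it may invoke. Below is a minimal Python sketch of that pattern and one common fix; the function names, device IDs, and shared secret are invented for illustration and are not from 3G Electronics’ code (only the “TAKEPILLS” command name comes from the report).

    ```python
    # Sketch of an unauthenticated server-to-server command endpoint
    # versus one that requires an HMAC over the request. Illustrative only.
    import hashlib
    import hmac

    SERVER_SECRET = b"rotate-me"  # shared only between trusted backend servers

    def handle_command_unsafe(device_id: str, command: str) -> str:
        # Flawed pattern: nothing checks who sent the request, so anyone who
        # can reach the API can push a "TAKEPILLS" reminder to any device.
        return f"sent {command} to {device_id}"

    def handle_command(device_id: str, command: str, signature: str) -> str:
        # Safer pattern: require an HMAC over the request, computed with a
        # secret that only legitimate backend servers hold.
        expected = hmac.new(SERVER_SECRET,
                            f"{device_id}:{command}".encode(),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, signature):
            raise PermissionError("unauthenticated server-to-server call")
        return f"sent {command} to {device_id}"
    ```

    With the authenticated variant, an attacker who discovers the endpoint still cannot forge commands without the shared secret; in practice this would sit alongside TLS and per-server credentials rather than a single static key.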
    The researchers also came across the software’s source code, which was accidentally made publicly available via a compiled node file hosted online as a backup without protection. 
    Server-side code, MySQL passwords, email, SMS, and Redis credentials, and a hard-coded password in the source code — 123456 — were available to view. A database containing user images was also open to abuse. 
    “The source code indicated that this bucket was where ALL the pictures taken by devices are sent. We have not confirmed that,” Pen Test Partners says. “Given the use case of these devices is predominately children’s trackers it is extremely likely these images will contain images of children.”
    It is not known if any of the security issues have been exploited in the wild. 
    Pen Test Partners disclosed its findings to 3G Electronics on January 22. The vendor did not respond until February 12. Triage then followed with the disclosure of the server API vulnerabilities on February 17, which was then fixed a day later. 
    On May 20, the researchers reported the node file issue to the vendor, and on May 29, 3G Electronics confirmed that the file had been removed and all passwords had been changed. 

    Have a tip? Get in touch securely via WhatsApp | Signal at +447713 025 499, or over at Keybase: charlie0