More stories

  •

    Singapore sends out drones to watch over reservoirs

    Singapore is sending out drones to monitor water quality and activities at its reservoirs. It hopes this will reduce by 5,000 the 7,200 man-hours officers currently spend each year carrying out various duties at these water catchment areas across the island. These include daily patrols to identify excessive growth of aquatic plants and algal blooms, which could affect water quality. Data is also collected on water activities, such as fishing and paddling, in and along the edge of the reservoirs, to ensure these are carried out safely.

    Singapore’s water agency PUB said in a statement Thursday it would deploy Beyond Visual Line of Sight drones to initially monitor two reservoirs, MacRitchie and Marina, before adding another four to the roster later this year: Serangoon, Kranji, Lower Seletar, and Lower Peirce.

    Singapore puts budget focus on transformation, innovation

    After tilting last year’s budget towards ‘emergency support’ in light of the global pandemic, Singapore’s government will spend SG$24 billion ($18.1 billion) over the next three years to help local businesses innovate and build the capabilities needed to take them through the next phase of transformation.


    The drones would be equipped with remote sensing systems and a camera to facilitate near real-time video analytics. They had been designed specifically to monitor water quality and activities, said PUB, a statutory board that is responsible for Singapore’s water supply and catchment as well as used water. Drone flights at Marina and MacRitchie would run four days a week, at regular intervals throughout the day, and at a lower frequency of one to two days weekly at the other four reservoirs.

    Rainwater falling on two-thirds of the city-state’s land area is redirected through a network of rivers, canals, and drains into 17 reservoirs, where it is harvested for potable consumption. According to PUB, the drones would be able to survey large areas of the reservoir and collect “comprehensive data”. They also would send out alerts when certain activities were detected, such as illegal fishing.

    Local vendor ST Engineering had been contracted to deploy its drone operating system DroNet, which had been further customised to cater to PUB’s needs. Trials were conducted at the reservoirs last year. The drones would be stored in automated pods, from which they would take off and land autonomously. Each would fly on pre-programmed flight plans within the reservoir compound and be remotely monitored by an operator. The drone’s remote sensing technology would analyse the water for turbidity and algae concentration, which would provide indications of water quality. Where necessary, PUB officers would visit the site to collect water samples for laboratory analysis.

    A video analytics algorithm also had been developed, and tested, to identify aquatic plant overgrowth in the reservoir using the live video feed from the drone’s camera. PUB officers would monitor the video feed as well as data via an online dashboard. When illegal water activities were detected, near real-time alerts would be sent to a dedicated Telegram channel, which officers could access via their mobile phones. The agency said cameras on the drones would not gather personal data, including facial recognition data, and their flight plans would not be near residential areas.

    Noting that Singapore’s reservoirs were an important source of water supply for the population, PUB’s director of catchment and waterways Yeo Keng Soon said it was challenging in terms of manpower to effectively monitor what went on at each reservoir and ensure the reservoirs remained in optimal condition. “Our use of drones is in line with PUB’s commitment to leverage technology as part of the SMART PUB roadmap to improve our operations and meet future needs,” Yeo said. “With the drones, we can channel manpower to more critical works, such as the inspection and maintenance of reservoir gates, as well as pump and valve operations.
    The drones also act as an early warning system that enhances our response time to the myriad of issues that our officers grapple with on a daily basis.”
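PUB has not described its alerting pipeline beyond naming a dedicated Telegram channel. As an illustration only, pushing a detection event into a channel typically goes through the Telegram Bot API’s sendMessage method; the bot token, channel name, and event fields below are hypothetical.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical values -- PUB has not published its bot or channel setup.
BOT_TOKEN = "<bot-token>"
CHAT_ID = "@reservoir_alerts"  # the dedicated channel


def format_alert(event: dict) -> str:
    """Render a detection event as a single human-readable alert line."""
    return (f"[{event['reservoir']}] {event['kind']} detected "
            f"at {event['time']} (confidence {event['confidence']:.0%})")


def send_alert(event: dict) -> None:
    """Push the formatted alert to the Telegram channel via the Bot API."""
    url = f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage"
    data = urllib.parse.urlencode({
        "chat_id": CHAT_ID,
        "text": format_alert(event),
    }).encode()
    with urllib.request.urlopen(urllib.request.Request(url, data=data)) as resp:
        json.load(resp)  # raises if the response body is not valid JSON


event = {"reservoir": "MacRitchie", "kind": "illegal fishing",
         "time": "14:02", "confidence": 0.91}
print(format_alert(event))
```

Because channel messages fan out to every officer’s phone automatically, a plain bot post is enough; no per-recipient delivery logic is needed.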

  •

    This phishing attack is using a call centre to trick people into installing malware on their Windows PC

    A prolific phishing campaign is attempting to trick people into believing they’ve subscribed to a movie-streaming service, then coerce them into calling a phone number to cancel, where someone will guide them through a procedure that infects their computer with BazaLoader malware.

    BazaLoader creates a backdoor onto Windows machines that can be used as an initial access vector for delivering additional malware attacks, including ransomware. The notorious Ryuk ransomware is commonly delivered via BazaLoader, meaning a successful compromise by cyber criminals could have extremely damaging consequences.

    The latest BazaLoader campaign is based around human interaction and an intricate attack chain that decreases the chance of the malware being detected.

    Detailed by cybersecurity researchers at Proofpoint, the first stage of the campaign involves the distribution of tens of thousands of phishing emails claiming to come from ‘BravoMovies’, a fake video-streaming service made up by cyber criminals. The website looks convincing, and those behind it have even made fake movie posters using open-source images available online, although the various spelling errors the site contains could hint that something isn’t right if the visitor looks carefully. The email claims the victim signed up for a trial period and will be charged $39.99 a month, but that the supposed subscription can be cancelled if they call a support line.

    If the user calls the number, they’re connected to a ‘customer service’ representative who’ll claim to guide them through the process of unsubscribing, but what they’re actually doing is telling the unwitting victim how to install BazaLoader on their computer. They do this by guiding the caller to a “Subscribtion” page (sic), where part of the process encourages them to click a link that downloads a Microsoft Excel spreadsheet. This document contains macros which, if enabled, will secretly download BazaLoader onto the machine, infecting the victim’s PC with malware.

    While this takes more hands-on effort by the attackers, directing users towards a payload away from the initial phishing email makes the malware more difficult to detect during the download and installation process.

    “Malicious attachments are often blocked by threat detection software. By directing people to phone the call centre as part of the attack chain, the threat actors can bypass threat detection mechanisms that would otherwise flag its attachments as spam,” Sherrod DeGrippo, senior director of threat research and detection at Proofpoint, told ZDNet. “However, doing so significantly lowers the likelihood of a victim engaging with the content and takes more time and effort on the part of the threat actors.”

    But for the attackers, it could be that the lower risk of the attack being discovered makes the extra effort worth it in the end. “Social engineering is the key to this attack chain and threat actors depend upon their social engineering lures to cause recipients to take an action to complete the attack chain and get the malware on the target’s machine,” said DeGrippo.

    To help protect users, and the wider organisation, from phishing attacks and social engineering, information security teams should train users to spot and report malicious emails.
    It’s also worth noting that while receiving an email claiming your credit card will be charged if you don’t respond is startling, creating a sense of urgency like this is a common technique used in phishing campaigns to trick the user into letting their guard down and following instructions.
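The infection above hinges on a macro-laden Excel file. One simple defensive check, not Proofpoint’s tooling but a general technique: modern Office documents (.xlsm, .docm, and even mislabelled .xlsx files) are ZIP containers, and a bundled VBA macro project appears inside as a vbaProject.bin part, so mail-gateway scripts can flag it without opening the document.

```python
import zipfile


def has_vba_macros(path: str) -> bool:
    """Return True if an OOXML Office file bundles a VBA macro project.

    OOXML files (.xlsx/.xlsm/.docx/.docm ...) are ZIP containers; a VBA
    project is stored as a vbaProject.bin part. Legacy binary formats
    (.xls/.doc) are not ZIPs and need other tooling (e.g. oletools).
    """
    try:
        with zipfile.ZipFile(path) as zf:
            return any(name.endswith("vbaProject.bin") for name in zf.namelist())
    except zipfile.BadZipFile:
        return False  # not an OOXML container at all
```

A check like this catches the payload regardless of what the file extension claims, which matters because the document here arrives via a download link rather than as a scannable attachment.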

  •

    Criminals love cryptocurrencies. Should you?

    The irresponsible libertarian rationale for all manner of bad behavior, popular in Silicon Valley, has found its greatest expression in cryptocurrencies like Bitcoin and its ilk. The anonymity of cryptocurrencies has made ransomware a global criminal enterprise.

    Digital mirage

    Currencies are traditionally a medium of exchange, a store of value, and a unit of account. Cryptocurrencies are bad at all three. Yes, they are a medium of exchange, but try to buy a house with one. Painful. As a store of value they are extremely unstable, since there is no underlying asset, such as the full faith and credit of a nation. That bears directly on the third use of currency, a unit of account: how do you maintain a set of books with a currency whose value is ever shifting? If you own a cryptocurrency and aren’t a criminal, you’re a speculator. To see how that ends, check out Tulip Mania.

    Anonymity is the problem today

    Good arguments can be made for a digital global currency, although that has serious problems too, as the Euro has demonstrated. The real problem is the anonymity that enables criminals to collect multi-million dollar ransoms without fear of being tracked down and brought to justice. How is that a good thing?

    The bigger problem

    Suppose you are a billionaire. You’ve gotten comfortable with evaluating risks, knowing that you can deploy a phalanx of Ivy League lawyers at $1,000 an hour to make your case and hold off the law. Gosh. Untraceable digital money. No more paying mules to haul cash across national borders. Pay foreign law firms to create shell companies to hide assets, be it a private island, a superyacht, arms dealing, drugs, or sex trafficking, with untraceable cash. Yum!

    Yeah, the local tax authorities might get lucky and figure out what you’ve done, but really, they don’t have the expertise. Unregulated digital currencies are empowering a whole new level of criminal.

    Plus, they’ve been hacked

    Much of the technical interest in cryptocurrencies is due to the blockchain data structure, a supposedly unhackable storage technology that preserves, forever, the history of a particular coin. But as we’ve seen repeatedly, the blockchain itself may not be hackable, but the supporting infrastructure surely is. Beyond that, one of these days quantum computing will leave computer science labs and enable the decryption, and rewriting, of cryptographically protected blockchains. Sorry, Mr. Boris owns your Bitcoin, not you. It says so right here.

    The Take

    I’ve been watching cryptocurrencies for years. I get the attraction. But it’s now clear, the Colonial Pipeline shutdown being only the latest example, that they need policing. Anonymity, at least, must end. I suspect that with policing, much of the attraction will disappear. Oh well. It doesn’t matter how much technobabble surrounds a bad idea. It’s still a bad idea.

    Comments welcome. If you think cryptocurrencies are wonderful, please tell me, in as few words as you can manage, why. I’m listening.
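The hash-chaining behind that “preserves, forever” claim is easy to sketch: each block commits to the hash of its predecessor, so editing one past transaction silently invalidates every later link. This toy chain (nothing like Bitcoin’s actual block format, which adds proof-of-work, Merkle trees, and more) shows the core idea:

```python
import hashlib
import json


def block_hash(block: dict) -> str:
    """Hash a block's full contents, including the previous block's hash."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()


def make_chain(transactions):
    """Build a toy chain where each block commits to its predecessor."""
    chain, prev = [], "0" * 64  # genesis predecessor is all zeros
    for tx in transactions:
        block = {"tx": tx, "prev": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain


def verify(chain) -> bool:
    """Recompute every link; a single edited transaction breaks the chain."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        prev = block_hash(block)
    return True


chain = make_chain(["alice->bob 1", "bob->carol 2"])
assert verify(chain)
chain[0]["tx"] = "alice->mallory 1"  # tamper with history
assert not verify(chain)
```

The point the column makes still stands: this integrity check protects the ledger itself, not the exchanges, wallets, and keys around it, which is where the hacks actually happen.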

  •

    Scared, human? Emotion-detection AI meets eye-tracking technology

    One of the hallmarks of the human condition is empathy. So what happens when machines become adept at knowing how you’re feeling?

    That reality is closer than ever now that an eye-tracking AI company has agreed to acquire an emotion-detection software firm: Smart Eye and Affectiva announced that they have entered into an agreement in which Smart Eye will acquire Affectiva for $73.5 million.

    The acquisition is a bid to secure a dominant position in the automotive industry, which has been aggressively adopting emotion-detection AI to improve the driving experience. Both companies have made inroads with OEM manufacturers (Smart Eye has contracts with 12 of the 20 largest global OEMs), but the space is increasingly competitive as Driver Monitoring Systems (DMS) become standard on high-end models and increasingly make their way into mid-market offerings.

    “As we watched the DMS category evolve into Interior Sensing, monitoring the whole cabin, we quickly recognized Affectiva as a major player to watch,” says Martin Krantz, Founder and CEO of Smart Eye. “Affectiva’s pioneering work in establishing the field of Emotion AI has served as a powerful platform for bringing this technology to market at scale. At the end of the day, this is about saving lives and bridging the gap between humans and machines. In the future, looking back at this moment in time, I am convinced that this is a decisive moment for road safety thanks to the announcement that we have made today.”

    Affectiva spun out of MIT Media Lab in 2009 with the goal of advancing machine learning and computer vision to gain a deep, human-centric understanding of how people are feeling, useful data for car companies. But if automotive is where the money is at the moment, there’s every chance the technology could quickly spread to adjacent industries. Affectiva already has a booming media analytics business and is used by 70 percent of the world’s largest advertisers, according to a spokesperson. Smart Eye is similarly used in human factors research to better understand human behavior.
    It’s not far-fetched to imagine the technology playing a crucial role in the rise of social robots in the near future.

    “We are thrilled to be merging with Smart Eye as the next step in Affectiva’s journey. This is a unique and exciting opportunity for us to join Smart Eye in bringing to market advanced AI with more comprehensive capabilities than either of us could provide alone,” says Dr. Rana el Kaliouby, Co-Founder and CEO of Affectiva. “Not only are our technologies very complementary, so are our values, our teams, our culture, and perhaps most importantly, our vision for the future. We share a conviction that the AI we are building now will one day become ubiquitous. It will be built into the fabric of the technologies we use in our daily lives and will forever change the way we interact with technology and each other in a digital world.”

    Even if you keep your emotional cards close to the vest, AI-enabled technology may soon have your number.

  •

    Big changes to 1Password in the browser as it adds biometric unlocking

    Popular password manager 1Password has updated its browser extension to support Apple’s Touch ID and Microsoft’s Windows Hello biometric authentication. Biometric authentication arrives with version 2.0 of the extension. The company notes that users will need the 1Password desktop application installed for biometric authentication to work, but support for Touch ID on compatible Macs and Windows Hello on Windows 10 PCs should speed up the unlocking experience.


    The new biometric authentication feature for the browser follows 1Password’s release of its Linux app earlier this month, whose backend was written in the well-liked Rust programming language. The browser biometrics feature is also available on Linux. 1Password’s Linux app was made available for Debian, Ubuntu, CentOS, Fedora, Arch Linux, and Red Hat Enterprise Linux.

    1Password has also introduced dark mode for its browser extension to help users working at night; it has been applied to the 1Password popup and its on-page suggestions. The company also developed a new save experience for adding new online accounts, designed to make it easier to create, save, and update logins inside the browser. The save window now displays everything that will be added to a new account item and lets users adjust the contents and add tags. Also, handily, the password generator suggests passwords that fit the password requirements of the site the user is on.

    1Password’s release notes for its 2.0 extension list 55 changes. Other handy features include the ability for Linux users to download file attachments created with 1Password, plenty of UI tweaks to improve the experience, a few bug fixes, faster pop-up load times, QR code detection for the Epic Games website, and password-filling fixes for specific websites.
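1Password has not published how its generator models a site’s password rules, but the general technique, guaranteeing at least one character from each required class and filling the rest from the combined alphabet, can be sketched as follows (the function and parameter names are illustrative, not 1Password’s API):

```python
import secrets
import string


def generate_password(length=20,
                      require=("lower", "upper", "digit", "symbol"),
                      symbols="!@#$%^&*"):
    """Generate a random password satisfying simple per-site character rules.

    A sketch of the idea only: one guaranteed character per required class,
    the remainder drawn uniformly from the union of those classes, then a
    cryptographically sourced shuffle so the guaranteed characters don't
    always sit at the front.
    """
    classes = {"lower": string.ascii_lowercase,
               "upper": string.ascii_uppercase,
               "digit": string.digits,
               "symbol": symbols}
    pools = [classes[c] for c in require]
    if length < len(pools):
        raise ValueError("length too short for the required character classes")
    alphabet = "".join(pools)
    chars = [secrets.choice(p) for p in pools]                      # one per class
    chars += [secrets.choice(alphabet) for _ in range(length - len(chars))]
    secrets.SystemRandom().shuffle(chars)
    return "".join(chars)


print(generate_password(16, require=("lower", "upper", "digit")))
```

Using the `secrets` module rather than `random` matters here: password material should come from the OS’s CSPRNG, not a seedable Mersenne Twister.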

  •

    Fake human rights organization, UN branding used to target Uyghurs in ongoing cyberattacks

    United Nations (UN) branding is being abused in a campaign designed to spy on Uyghurs. On Thursday, Check Point Research (CPR) and Kaspersky’s GReAT team said that the campaign, likely the work of a Chinese-speaking threat actor, is focused on Uyghurs, a Turkic ethnic minority found in Xinjiang, China.

    Potential victims are sent phishing documents branded with the United Nations Human Rights Council (UNHRC) logo. Named UgyhurApplicationList.docx, this document contains decoy material relating to discussions of human rights violations. However, if the victim enables editing on opening the file, VBA macro code checks the PC’s architecture and downloads either a 32- or 64-bit payload. Dubbed “OfficeUpdate.exe,” the file is shellcode that fetches and loads a remote payload, although at the time of analysis the IP was unusable.

    The domains linked to the malicious email attachment expanded the investigation to a malicious website used for malware delivery under the guise of a fake human rights organization. The “Turkic Culture and Heritage Foundation” (TCAHF) domain claims to work for “Turkic culture and human rights”, but its copy has been stolen from opensocietyfoundations.org, a legitimate civil rights outfit. The website, directed at Uyghurs seeking funding, tries to lure visitors into downloading a “security scanner” before they fill in the information required to apply for a grant. However, the software is actually a backdoor.

    The website offered a macOS and a Windows version, but only the link to the latter downloaded the malware. Two versions of the backdoor were found: WebAssistant, which was served in May 2020, and TcahfUpdate, which was loaded from October. The backdoors establish persistence on victim systems, conduct cyberespionage and data theft, and may be used to execute additional payloads. Victims have been located in China and Pakistan, in regions mostly populated by Uyghurs.

    CPR and Kaspersky say that while the group doesn’t appear to share any infrastructure with other known threat groups, they are most likely Chinese-speaking and are still active, with new domains registered this year to the same IP address connected to past attacks. “Both domains redirect to the website of a Malaysian government body called the ‘Terengganu Islamic Foundation’,” the researchers say. “This suggests that the attackers are pursuing additional targets in countries such as Malaysia and Turkey, although they might still be developing those resources as we have not yet seen any malicious artifacts associated with those domains.”

  •

    Better long-term ROI pushed NBN to replace G.Fast FttC with full fibre lead-ins

    NBN CEO Stephen Rue has explained why the company responsible for the National Broadband Network has reversed its previous plans to use G.fast for speeds higher than 250Mbps, instead announcing earlier this month that users would get full fibre upgrades.

    In the current fibre to the curb (FttC) footprint, because the company wanted a diversity of suppliers, half of FttC connections can only be used with VDSL, while the other half can do VDSL and G.fast, Rue explained. “If we’re going to provide higher speeds beyond 100Mbps to people, we wanted to look at the long-term, the 15-year roadmap if you like,” he told Senate Estimates on Thursday. “When we looked at it, we took the view that if we’re using G.fast, there would still be things like copper remediation, there may be still some home wiring in the home, and it was also going to be IT system builds for us and the retailers, and a harder thing for retailers to manage because they’d have to explain what service they were getting. So we concluded that the best return on investment for those customers who wanted more than 100Mbps was to provide a fibre lead-in.”

    Rue said it was not a complete abandonment of the technology, and there could be situations where NBN would use it, but retailers would not need to know it was in use. Another factor in favour of full fibre was that, in the FttC footprint, NBN had already pulled fibre down the street, so it was already close to the premises. “G.fast absolutely could have delivered, certainly at 250 megs and above,” Rue said. “There’s no technical issue with G.fast.”

    The CEO would not be drawn on how many fibre lead-ins NBN was expecting to build, asking that the question be deferred to the next set of hearings, which would come after the company releases its next corporate plan.

    In response to questions on notice, NBN said it had replaced approximately 47,700 NBN Co Connection Devices (NCDs) used on FttC connections between November and March as a result of severe weather. The company further said that during 2020 it swapped 57,000 NCDs, and so far this year it has replaced 44,300. In March, the company said it was looking for a long-term solution to lightning frying FttC equipment, an issue highlighted in the Blue Mountains area of NSW. On Thursday, Rue revealed NBN had issued around half a million devices that might go pop if struck by lightning, or if lightning landed nearby.

    “Investigations have found that in some instances a component in the device does not meet our design specifications and while these devices are failing in a safe way, it is happening more often than expected in selected areas causing a reliability issue for some customers,” Rue said. “In these areas, when a device does fail due to electrical over-stress, we are now replacing it with an NCD that meets our original design specifications, and is much less likely to fail again under similar circumstances. Approximately 500,000 devices with the component issue were deployed into the network as the issue coincided with our peak FttC deployment period.”

    Although the root cause has yet to be established, Rue said the company is implementing a “fast-track” solution that aims to detect a failed device and have a replacement shipped to customers within 24 hours. Amongst a bevy of questions about its prime contracting arrangements, NBN revealed the issue with its technician workload software when it launched in New South Wales: It got overloaded.
    “What happened, when it was rolled out in New South Wales, the platform went down and then, due to literally the doubling of our workforce on the system, we then had the issues around the functionality where it wasn’t syncing properly, so therefore it caused a poor experience,” COO Kathrine Dyer said.

    The app uses ServiceMax, which sits on top of the Salesforce platform, with ServiceNow functionality handling scheduling and how work is fed into the ServiceMax tool, the COO said. Dyer said the software was hit by a trio of factors: a two-day platform outage that hit NBN and technicians; syncing failures; and updates to its functionality. “We were getting agile-based feedback from the sub-contractors in relation to the usability, and we were working with them, based on the feedback we were getting, to streamline the usability as we were rolling out the app,” Dyer said.

  •

    Human Rights Commission calls for a freeze on 'high-risk' facial recognition

    The Australian Human Rights Commission (AHRC) has called for stronger laws around the use of facial recognition and other biometric technology, asking for a ban on its use in “high-risk” areas.

    The call was made in a 240-page report [PDF] from the AHRC, with outgoing Human Rights Commissioner Edward Santow saying Australians want technology that is safe, fair, and reliable, and that with the right settings in law, policy, education, and funding, the government, alongside the private sector, can “build a firm foundation of public trust in new technology”.

    “The use of AI in biometric technology, and especially some forms of facial recognition, has prompted growing public and expert concern,” the report says. As a result, the Commission recommends privacy law reform to protect against the “most serious harms associated with biometric technology”. “Australian law should provide stronger, clearer, and more targeted human rights protections regarding the development and use of biometric technologies, including facial recognition,” it wrote. “Until these protections are in place, the Commission recommends a moratorium on the use of biometric technologies, including facial recognition, in high-risk areas.”

    The report details a number of concerns raised throughout the AHRC’s consultation on the use of biometrics, such as the risk of profiling and of errors leading to discrimination, including bias against people of colour, as well as a blanket concern over mass surveillance.

    The AHRC has made a number of recommendations as a result, with the first asking federal, state, and territory governments to introduce legislation that regulates the use of facial recognition and other biometric technology. The legislation, it said, should expressly protect human rights; apply to the use of this technology in decision making that has a legal, or similarly significant, effect for individuals, or where there is a high risk to human rights, such as in policing and law enforcement; and be developed through in-depth consultation with the community, industry, and expert bodies such as the AHRC and the Office of the Australian Information Commissioner (OAIC).

    “To date, existing legislation has not proven to be an effective brake on inappropriate use of facial and other biometric technology,” the report says. “Without effective regulation in this area, it seems likely that community trust in the underlying technology will deteriorate.” It has urged all governments across the country to work together.

    The AHRC has asked that the moratorium on the use of facial recognition and other biometric technology in such high-risk decision making be continued until legislation is in place. The moratorium, however, would not apply to all uses of facial and biometric technology.
    “Particular attention should be given to high-risk contexts, such as the use of facial recognition in policing, in schools, and in other areas where human rights breaches are more likely to occur,” it adds. It also said the government should introduce a statutory cause of action for serious invasion of privacy where biometrics are concerned.

    Calling for a modernised regulatory system to ensure that AI-informed decision making is “lawful, transparent, explainable, responsible, and subject to appropriate human oversight, review, and intervention”, the AHRC has also requested the creation of a new AI Safety Commissioner to help lead Australia’s transition to an “AI-powered world”. Desirably operating as an independent statutory office, the AI Safety Commissioner should focus on promoting safety and protecting human rights in the development and use of AI in Australia, such as by working with regulators to build technical capacity regarding the development and use of AI in their respective areas, as well as being responsible for monitoring and investigating developments and trends in the use of AI.

    It has also asked the government to convene a multi-disciplinary taskforce on AI-informed decision making that could perhaps be led by the AI Safety Commissioner. “The taskforce should consult widely in the public and private sectors, including with those whose human rights are likely to be significantly affected by AI-informed decision making,” it said. The report has also asked the government to resource the AHRC accordingly so that it can produce guidelines on how to comply with federal anti-discrimination laws in the use of AI-informed decision making.

    To that end, another recommendation is that the government introduce legislation requiring that a human rights impact assessment (HRIA) be undertaken before any department or agency uses an AI-informed decision-making system to make administrative decisions, as well as other legislation requiring any affected individual to be notified when AI is materially used in making an administrative decision. It has also asked for an audit of existing, or proposed, AI-informed decision making.

    Making a total of 38 recommendations, the AHRC also touches on legal accountability for private sector use of AI, asking that the legislation flagged for government use of AI also be extended to non-government entities. Elsewhere, it has asked the Attorney-General to develop a Digital Communication Technology Standard under section 31 of the Disability Discrimination Act 1992 and consider other law and policy reform to implement the full range of accessibility obligations regarding Digital Communication Technologies under the Convention on the Rights of Persons with Disabilities. Additionally, it wants federal, state, territory, and local governments to commit to using digital communication technology that fully complies with recognised accessibility standards.

    “We need to ask a crucial question: Can we harness technology’s positive potential to deliver the future we want and need, or will it supercharge society’s worst problems? The decisions we make now will provide the answer,” Santow said. He labelled the report as setting out a roadmap for achieving this goal.