More stories

  • Twitter’s new strike system will target prolific COVID-19 fake information spreaders

    Twitter is set to introduce a strike system to remove repeat spreaders of COVID-19 vaccine misinformation from the platform. 

    On Monday, Twitter said that alongside removing thousands of tweets and examining over 11.5 million accounts linked to fake information on the microblogging platform, the company will now start applying labels to tweets “that may contain misleading information about COVID-19 vaccines.”
    This system is similar to one already imposed by Facebook, which has also adopted a targeted approach to misinformation based on user location, measuring attitudes to topics including vaccination and mask-wearing worldwide. 
    Twitter will first use human employees to make the decisions over whether tweets violate company policy, and these assessments will then be used to train automated tools and algorithms to detect misinformation. 
    The firm intends to eventually use “both automated and human review to address content that violates our COVID-19 vaccine misinformation rules.”
    Persistent spreaders of fake COVID-19 vaccine content will receive a ‘strike’. While this won’t deter bots, Twitter hopes the system will “educate” users “on why certain content breaks our rules so they have the opportunity to further consider their behavior and their impact on the public conversation.”
    Twitter will alert users when they receive a strike, and after two, a 12-hour account lock will be applied. After three strikes, another 12-hour ban will be imposed, and after four, users will be unable to access their account for a week. 

    Five strikes or more will be punished by permanent suspension. However, users do retain the right to appeal. 
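    Twitter’s published thresholds map cleanly to a simple escalation table. The sketch below illustrates that mapping in Python; the function and action names are hypothetical, not part of any Twitter system.

```python
# Illustrative sketch of Twitter's published strike thresholds.
# Names are hypothetical; this is not Twitter's actual implementation.
ACTIONS = {
    1: "no account-level action (tweet may be labeled or removed)",
    2: "12-hour account lock",
    3: "12-hour account lock",
    4: "7-day account lock",
}

def enforcement_action(strikes: int) -> str:
    """Return the enforcement step for a given strike count."""
    if strikes >= 5:
        return "permanent suspension (appeal available)"
    return ACTIONS.get(strikes, "no action")

for n in range(1, 6):
    print(n, "->", enforcement_action(n))
```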
    In addition to introducing the strike system, Twitter has debuted a COVID-19 search prompt that surfaces results from official sources including health organizations, offered free non-profit advertising, and continued its collaboration with the World Health Organization (WHO). 

  • Australia's new 'hacking' powers considered too wide-ranging and coercive by OAIC

    The Office of the Australian Information Commissioner (OAIC) has labelled the powers given to two law enforcement bodies within three new computer warrants as “wide-ranging and coercive in nature”.
    The Surveillance Legislation Amendment (Identify and Disrupt) Bill 2020, if passed, would hand the Australian Federal Police (AFP) and the Australian Criminal Intelligence Commission (ACIC) the new warrants for dealing with online crime.
    The first of the warrants is a data disruption one, which according to the Bill’s explanatory memorandum, is intended to be used to prevent “continuation of criminal activity by participants, and be the safest and most expedient option where those participants are in unknown locations or acting under anonymous or false identities”.
    The second is a network activity warrant that would allow the AFP and ACIC to collect intelligence from devices that are used, or likely to be used, by those subject to the warrant.
    The last warrant is an account takeover warrant that would allow the agencies to take control of an account for the purposes of locking a person out of the account.
    See also: Intelligence review recommends new electronic surveillance Act for Australia
    “The OAIC acknowledges the importance of law enforcement agencies being authorised to respond to cyber-enabled and serious crime. However, the Bill’s proposed powers are wide-ranging and coercive in nature,” it wrote [PDF].

    It said, for example, data disruption and network activity warrants may authorise entering specified premises, removing computers or data, and intercepting communications. Network activity warrants, OAIC said, can authorise the use of surveillance devices, and both data disruption and network activity warrants may authorise the concealment of certain activities done under these warrants.
    “These powers may adversely impact the privacy of a large number of individuals, including individuals not suspected of involvement in criminal activity, and must therefore be subject to a careful and critical assessment of their necessity, reasonableness, and proportionality,” its submission to the Parliamentary Joint Committee on Intelligence and Security (PJCIS) continued.
    “Further, given the privacy impact of these law enforcement powers on a broad range of individuals and networks, they should be accompanied by appropriate privacy safeguards.”
    The OAIC believes the Bill requires further consideration to better ensure that any adverse effects on the privacy of individuals which result from these coercive powers are minimised, and that additional privacy protections are included in the primary legislation.
    It also wants the Bill amended to require issuing authorities to consider the impact of the warrants on the privacy of any individual when determining applications for data disruption warrants and network activity warrants, in addition to account takeover warrants.
    Likewise, the OAIC has asked for a limit on the number of warrant extensions that can be sought in respect of the same or substantially the same circumstances, and for the issuing authority to be required to consider the privacy impact on any individual arising from the extension of the warrant, to ensure that the potential law enforcement benefits are necessary and proportionate to this impact.
    Elsewhere, the commissioner has asked the Bill be amended to only allow for judicial oversight and authorisation of warrants issued under it.
    The chief officer of the AFP or ACIC may apply for a network activity warrant if that officer suspects on reasonable grounds that a group of individuals constitutes a “criminal network of individuals”. The OAIC believes the Bill’s definition has the potential to capture a significant number of individuals, including third parties who are only incidentally connected to the subject or subjects of the warrant.
    “The seriousness of this impact upon privacy requires further mitigation with commensurate safeguards,” it said. “The OAIC recommends amending the Bill to narrow the definition of ‘criminal network of individuals’.”
    Among its recommendations is a mandate that information contained in denied warrant applications be destroyed, as well as a requirement that agencies assess the utility of collected information and take active steps to destroy it once it is no longer necessary for the purposes of criminal investigations.

  • Google joins call for clarification on much of Australia's 'rushed' Online Safety Bill

    Communications Minister Paul Fletcher last week put forward Australia’s new Online Safety Bill, which the government touted would further empower the eSafety Commissioner to request the removal of harmful material from websites and social media platforms, as well as introduce minimum standards for service providers to comply with.

    The Online Safety Bill 2021 entered Parliament on Wednesday, eight business days after consultation on the draft legislation closed. Submissions made to the draft consultation are yet to be released, but Fletcher said it had received 370 submissions.
    The Bill is before the House of Representatives and was referred to the Senate Standing Committees on Environment and Communications last Thursday. Submissions to the committee close on Tuesday — three business days after it was referred — with a report from the committee due on March 11, which is two weeks after the Bill was introduced.
    The Bill contains six key priority areas: a cyberbullying scheme to remove material that is harmful to children; an adult cyber abuse scheme to remove material that seriously harms adults; an image-based abuse scheme to remove intimate images that have been shared without consent; basic online safety expectations for the eSafety Commissioner to hold services accountable; an online content scheme for the removal of “harmful” material through take-down powers; and an abhorrent violent material blocking scheme to block websites hosting abhorrent violent material.
    The committee has made a handful of submissions to its speedy inquiry available, including from Google Australia [PDF], which re-submitted the latest copy it sent to the draft consultation, given the “abbreviated timetable for this inquiry”.
    Google raised concerns that the schemes would appear to apply to other sorts of services, such as messaging services, email, application stores, and business-to-business services that serve as providers for other hosting services.
    “Therefore, compliance with certain obligations contained within the Bill will be challenging if not impossible for Google’s Cloud business due to technical limitations on how Google can and should moderate business client content,” it wrote. “Similar challenges would exist within, for instance, app distribution platforms like Google Play. There, too, the app platform operator does not have the ability to remove individual pieces of content from within an app.”

    Among many other concerns, it has also taken issue with the Bill’s defined takedown period, which proposes to halve the current 48-hour period to 24 hours.
    It said specifying an exact turnaround time, regardless of case complexity, would provide an incentive for companies to over-remove, thereby silencing political speech and user expression.
    Electronic Frontiers Australia (EFA) is similarly concerned with the Bill. It said it was deeply troubled by the rush to accumulate new powers, concentrated in few hands and subject to little oversight or review.
    “Authorities’ failure to enforce existing laws is frequently used to justify new powers that can be used ‘more efficiently’ which in practice means it will be done with less oversight and with fewer safeguards against abuse,” a submission penned by EFA board member and PivotNine founder and chief analyst Justin Warren said.
    “Power over others should be difficult to use. This difficulty provides an inbuilt safeguard against abuse which is necessary because all power is abused, sooner or later.
    “Australia is rushing to construct a system of authoritarian control over the population that should not be welcomed by a liberal democracy. It is leading Australia down a very dark path.”
    Among other recommendations, the EFA asked that the Bill’s introduction be delayed until after a federally enforceable human rights framework is introduced into Australian law.
    Part of the Bill provides that the eSafety Commissioner may obtain information about the identity of an end-user of a social media service, a relevant electronic service, or designated internet service; another part provides the commissioner with investigative powers, including a requirement that a person provide “any documents in the possession of the person that may contain information relevant”.
    As a result, Australian advocacy group Digital Rights Watch is concerned that the commissioner’s information-gathering and investigative powers could extend to encrypted services.
    It has asked for additional clarification of the scope of these powers, along with a clear indication that providers are not expected to comply with a notice if it would require them to decrypt private communications channels or build systemic weaknesses to comply.
    Making its views on the Bill public via its own website, Digital Rights Watch said the Bill introduces provisions for powers that are likely to undermine digital rights and exacerbate harm for vulnerable groups.
    The online content scheme, Digital Rights Watch said, is likely to cause significant harm to those who work in the sex industry, including sex workers, pornography creators, online sex-positive educators, and activists.
    The abhorrent content blocking scheme, which comes in direct response to the Christchurch terrorist attack, is considered overly simplistic by the group.
    “In some circumstances, violence captured and shared online can be of vital importance to hold those in power accountable, to shine the light on otherwise hidden human rights violations, and be the catalyst for social change,” it wrote, pointing specifically to the video of George Floyd’s death.
    “Simply blocking people from seeing violent material does not solve the underlying issues causing the violence in the first place and it can also lead to the continuation of violence behind closed doors, out of sight from those who might seek accountability. It is essential that this scheme not be used to hide state use of violence and abuses of human rights.”
    The organisation said that when automated processes such as AI are used to determine which content is or isn’t harmful, they have been shown to disproportionately remove some content over others, penalising Black, Indigenous, fat, and LGBTQ+ people.
    “While the goal of minimising online harm for children is vital to our communities, we must acknowledge that policing the internet in such broad and simplistic ways will not guarantee us safety and will have overbroad and lasting impacts across many different spaces,” Digital Rights Watch said.
    Submissions close today and a hearing is scheduled for the committee on Friday.

  • SolarWinds security fiasco may have started with simple password blunders

    We still don’t know just how bad the SolarWinds security breach is. We do know over a hundred US government agencies and companies were cracked. Microsoft president Brad Smith said, with no exaggeration, that it’s “the largest and most sophisticated attack the world has ever seen,” with more than a thousand hackers behind it. But former SolarWinds CEO Kevin Thompson says it may have all started when an intern first set an important password to “solarwinds123”. Then, adding insult to injury, the intern shared the password on GitHub.


    You can’t make this stuff up.
    Also: Best password manager in 2021
    Thompson told a joint US House of Representatives Oversight and Homeland Security Committees hearing that the password was “a mistake that an intern made. They violated our password policies and they posted that password on an internal, on their own private Github account. As soon as it was identified and brought to the attention of my security team, they took that down.”
    Rep. Katie Porter, Democrat from California, rejoined, “I’ve got a stronger password than ‘solarwinds123’ to stop my kids from watching too much YouTube on their iPad.”
    How long did it actually take SolarWinds to replace the lousy password? Too long. 
    While SolarWinds executives said it was fixed within days of its discovery, current SolarWinds CEO Sudhakar Ramakrishna confessed that the password had been in use since 2017. Vinoth Kumar, the security researcher who discovered the leaked password, had said SolarWinds didn’t fix the issue until November 2019. 

    Almost two years is too long to let an important password go stale. You also have to wonder what an intern was doing setting a significant password in the first place.  
    While SolarWinds isn’t sure that this password is the hole in the dyke that Russian hackers used to flood into American systems, it’s a safe bet that a security culture that enabled such a basic mistake couldn’t have helped.
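    The kind of weakness at issue here is trivially catchable by an automated policy check. Below is a minimal, hedged sketch of such a check in Python; the rules and denylist are generic hygiene examples, not SolarWinds’ actual policy.

```python
import re

# Generic password-hygiene check (illustrative rules, not SolarWinds' policy).
COMMON_PATTERNS = [r"(?i)password", r"(?i)solarwinds", r"123+$", r"(?i)qwerty"]

def policy_violations(pw: str, min_length: int = 12) -> list[str]:
    """Return a list of reasons a password fails basic hygiene rules."""
    problems = []
    if len(pw) < min_length:
        problems.append(f"shorter than {min_length} characters")
    if not re.search(r"[A-Z]", pw):
        problems.append("no uppercase letter")
    if not re.search(r"[^A-Za-z0-9]", pw):
        problems.append("no symbol")
    problems += [f"matches weak pattern {p!r}" for p in COMMON_PATTERNS
                 if re.search(p, pw)]
    return problems

print(policy_violations("solarwinds123"))
# ['no uppercase letter', 'no symbol',
#  "matches weak pattern '(?i)solarwinds'", "matches weak pattern '123+$'"]
```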
    Also: Better than the best password: How to use 2FA to improve your security
    Looking ahead, Smith suggested to the US Senate that in the future the Federal government should impose a “notification obligation on entities in the private sector.” All too often, no one knows about corporate security breaches until they’ve blown up the way SolarWinds’ failure did. Smith agreed that it isn’t “a typical step when somebody comes and says, ‘Place a new law on me,'” but “I think it’s the only way we are going to protect the country.”

    In the meantime, as security company FireEye CEO Kevin Mandia said at the House hearing, “The bottom line: We may never know the full range and extent of the damage, and we may never know the full range and extent as to how the stolen information is benefiting an adversary.”
    That said, Mandia added, “I’m not convinced compliance in any standards regulation or legislation would stop Russian Foreign Intelligence Service from successfully breaching the organization.” 

  • Singapore eyes more cameras, technology to boost law enforcement

    Singapore is looking to expand its use of cameras and technology to better support law enforcers and first responders. These include plans to tap sensors, video analytics, artificial intelligence (AI), automation, and drones to ease manpower shortages and improve service efficiencies. 
    As it is, the police have deployed almost 90,000 cameras in public locations such as carparks and residential estates across the island. And “many more” will be rolled out in the coming years, according to Minister for Home Affairs and Minister for Law K. Shanmugam, who was speaking in parliament Monday. 
    Describing these cameras as “a game-changer” in deterring and investigating crimes, he said the devices had helped the police solve 4,900 cases as of December 2020. 


    Shanmugam noted that there were limits to resources and manpower, and his ministry had focused on transformation with increased use of technology to address the shortage. 
    Neighbourhood police centres and police posts, for instance, had been redesigned to include automated self-help kiosks, so citizens could access police services 24/7, he said. 
    Some 300 next-generation Fast Response Cars also would hit the roads by 2023, equipped with cameras capable of streaming a 360-degree view of their surroundings back to the Police Command Centre. This would enable agents at the command centre to assess the situation and deploy backup, he said. The vehicles also would be armed with video analytics technology to read number plates and automatically flag vehicles of interest. 
    “So you will be surrounded by sensors, which make people feel safer and more confident,” the minister said. 

    In addition, the police had been trialling beacon prototypes for a year, enabling the public to contact law enforcement directly during emergencies. Located across two residential estates, these beacons were equipped with various capabilities to “create deterrence and project presence”, he said, adding that they also had CCTV cameras to allow the police to assess the situation quickly. 
    Beyond policing, efforts were underway to build “smart” fire stations that would make greater use of sensors and automation to facilitate operational response, decision making, and manpower management. Manual processes such as tracking the readiness of emergency supplies and vehicles, and personnel rostering, would be automated, said Shanmugam. 
    An AI-powered system also would send information during an emergency, such as a building’s floor plans and on-site live video feed, to officers before they arrived at the location. This would enable them to better assess the situation, develop a plan more quickly, and improve their response. 
    Emergency first responders also would have smart wearables that were integrated with the smart fire station’s systems, enabling commanders to monitor their officers’ physical condition during operations and training. 
    Moving to immigration control, Shanmugam said further enhancements would be made to verify travellers’ identities through iris and facial images at automated lanes, bypassing the use of passports and thumbprints. Trials were underway and showing promising results, he added.
    He also pointed to the use of drones and robots to facilitate security operations at COVID-19 isolation facilities, which reduced the risk of exposure for frontline officers.
    Robots also had been tapped to fight fires, including an industrial blaze last March, where they tackled the most dangerous parts of the fire amid immense heat and poor visibility, he noted.

  • Scientists have built this ultrafast laser-powered random number generator

    Using a single, chip-scale laser, scientists have managed to generate streams of completely random numbers at about 100 times the speed of the fastest random number generator systems currently in use.  
    The new system, which is described as “massively parallel ultrafast random bit generation,” could be used to generate the cryptography keys that secure highly sensitive data and transactions, which are currently at risk of attack from hackers armed with ever-increasing computer power.  

    Randomness has a fundamental role to play in cryptography: the more random a security key is, the harder it is to use logical mathematics to crack the code. This is why random number generators are used to encrypt data: the technology creates streams of bits that can in turn be used to produce very strong cryptography keys.  
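    On the consuming side, applications typically draw key material from a cryptographically secure source that pools exactly this kind of physical entropy. A minimal Python example, using only the standard library (this illustrates the consumer side, not the laser hardware described here):

```python
import secrets

# Draw 256 bits of key material from the OS's cryptographically
# secure random source, which hardware entropy sources can feed.
key = secrets.token_bytes(32)   # 32 bytes = 256 bits
print(key.hex())

nonce = secrets.token_hex(16)              # 128-bit nonce as hex
coin = secrets.choice(["heads", "tails"])  # unbiased pick
```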
    There are many ways to generate random numbers, the most well-known of which can be traced back thousands of years: for instance, a simple dice roll or coin flip provides unpredictable results. This is what modern cryptography is attempting to emulate. 
    Of course, manual random number generation is incapable of keeping pace with the scale of demand for data security. To create large amounts of random numbers at scale, new technologies were developed to quickly translate into bits, or numbers, the unpredictable behavior of some natural phenomena.  
    Laser light, for example, is made up of tiny quantum particles – photons – that behave in a chaotic, unpredictable manner, and the random fluctuations of the particles that make up a laser beam can be detected by a computer and translated into sequences of numbers that are completely non-deterministic.  
    Although the unpredictable properties of lasers have been used to generate random numbers before, those systems are limited. Laser-based systems aren’t capable of producing many numbers very fast, nor can they generate numbers simultaneously from a single beam. 

    “Usually, those physical random number generators are not very fast – that’s one problem,” said Hui Cao, professor of applied physics at Yale University, who led the study. “Also, they are sequential – that is, they usually just generate one bitstream. They cannot generate many bitstreams simultaneously. And in each stream, the rate is relatively low, so that prevents it from generating a lot of random numbers very quickly.” 
    At the same time, the need for a system that can produce random numbers at scale is fast increasing. As networks expand in an ever-connected way, it is becoming necessary to increase the generation rate of random numbers to keep pace with demand, and make sure that sensitive data is appropriately protected. 
    To improve the output of laser-based random number generators, Cao and her team created a compact single laser and tweaked the design of the laser cavity to make it resemble an hourglass. When the laser is switched on, light waves ricochet between the two ends of the hourglass, simultaneously resonating in the device; the fluctuations in the intensity of the quantum particles of light are recorded by a fast camera and translated by a computer into random series of numbers.  
    Thanks to the new design, the cavity acts as a resonator for the light waves, meaning that random bits can be generated in parallel, even with a single laser diode – a first for light-based random number generators. 
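    The general shape of that last translation step is worth making concrete. A common, generic recipe for turning noisy intensity samples into usable bits is to threshold the samples and then debias the result; the sketch below illustrates one such recipe (median thresholding plus von Neumann debiasing) and is not the Yale team’s actual pipeline.

```python
import numpy as np

# Generic post-processing recipe for physical randomness (illustrative,
# not the Yale pipeline): threshold raw intensity samples at the median,
# then apply von Neumann debiasing to strip residual bias.
rng = np.random.default_rng(0)
samples = rng.normal(size=1_000_000)  # stand-in for camera pixel intensities

raw_bits = (samples > np.median(samples)).astype(np.uint8)

def von_neumann(bits: np.ndarray) -> np.ndarray:
    """Keep the first bit of each unequal pair: 01 -> 0, 10 -> 1."""
    pairs = bits[: len(bits) // 2 * 2].reshape(-1, 2)
    keep = pairs[:, 0] != pairs[:, 1]
    return pairs[keep, 0]

unbiased = von_neumann(raw_bits)
print(f"{len(unbiased)} debiased bits from {len(raw_bits)} raw samples")
```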
    The results are promising, both in speed and scale: using the new amplifying system, Cao and her team generated about 250 terabits, or 250,000 gigabits, of random bits per second, which is more than two orders of magnitude higher than the fastest current systems. The researchers said that the technology can also be scaled up “significantly”. 
    “It really opens a new avenue on how to generate random numbers much faster, and we have not reached the limit yet,” said Cao. “As to how far it can go, I think there’s still a lot more to explore.” 
    For the technology to be ready for practical use, however, it will be necessary to create a compact chip that incorporates both the laser and the photodetectors that could directly and rapidly send measurements to computers in real-time.  
    With many companies looking at innovative ways to leverage light particles for random number generation, the field is likely to be busy in the next few years.  
    UK-based quantum company Nu Quantum, for example, is working on a device that can emit and detect quantum particles of light, called single photons. In the long term, Nu Quantum’s founders hope that the technology will be used to build large-scale quantum computers; for now, however, the start-up is working with the National Physical Laboratory to commercialize the device for quantum random number generation.

  • Free cybersecurity tool aims to help smaller businesses stay safer online

    Small businesses can receive bespoke advice on how to improve their cybersecurity and protect their networks from malicious hackers and cyber crime via a new tool from the National Cyber Security Centre (NCSC).
    The ‘Cyber Action Plan’ is a free online service designed to help small businesses protect themselves against cyber attacks.
    While smaller businesses might not believe they’re a tempting target for cyber criminals, almost half have reported cybersecurity breaches or attacks over the last year. That figure is up from under a third of SMBs reporting incidents during the previous twelve months.
    For cyber criminals, while targeting smaller businesses might not be as lucrative as campaigns against larger ones, the potential lack of cybersecurity barriers could provide easy pickings. An attacker may also target a small business as a stepping stone in a supply chain attack against a larger target.
    SEE: What is cyber insurance? Everything you need to know about what it covers and how it works
    The NCSC’s Cyber Action Plan tool aims to help small businesses improve their resilience to cyber attacks: it walks them through a short questionnaire about their current cybersecurity strategy and provides customised advice on how the business could be better protected against cyber crime.
    Some of the potential recommendations include building a backup strategy and regularly updating those backups, using a strong password and multi-factor authentication, as well as making sure that software updates and security updates are regularly applied.
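    The first of those recommendations lends itself to simple automation. Below is a minimal, hedged sketch of a backup-freshness check in Python; the directory path, file pattern, and threshold are hypothetical placeholders.

```python
import time
from pathlib import Path

# Warn when the newest backup is stale, in the spirit of the NCSC's
# "keep your backups updated" advice. Path and threshold are examples.
BACKUP_DIR = Path("/srv/backups")
MAX_AGE_DAYS = 7

newest = max(BACKUP_DIR.glob("*.tar.gz"),
             key=lambda p: p.stat().st_mtime, default=None)
if newest is None:
    print("No backups found - create one now.")
else:
    age_days = (time.time() - newest.stat().st_mtime) / 86400
    status = "OK" if age_days <= MAX_AGE_DAYS else "STALE - refresh your backup"
    print(f"Latest backup: {newest.name} ({age_days:.1f} days old) -> {status}")
```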

    SEE: Network security policy (TechRepublic Premium)
    By applying relatively simple cybersecurity procedures like these, small businesses can go a long way towards protecting themselves from falling victim to data breaches, malware, ransomware and other cyber attacks.
    “Small businesses are the lifeblood of this country, but we know they can be a target for cyber criminals, particularly as they move more operations online,” said Sarah Lyons, deputy director for economy and society at the NCSC.
    “Our free Cyber Action Plan is here to help, offering bespoke, actionable information linked to the Cyber Aware behaviours. If you work for yourself, or run a small business, I would urge you to spend a few minutes on the questionnaire and follow the steps to help secure your business,” she added.
    The action plan is the latest in a line of tools and initiatives from the NCSC designed to help protect businesses and individuals from falling victim to cyber attacks – and to help them know what to do if they do become a victim of cyber crime.
    The NCSC will be launching a version of the cyber action plan designed to help individuals and families protect themselves from cyber attacks at some point in the future.

  • Google: Bad bots are on the attack, and your defence plan is probably wrong

    Google is warning that bots are causing more problems for business — but many companies are only focused on the most obvious attacks.
    At the outset of the COVID-19 pandemic Microsoft chief Satya Nadella said Microsoft had seen “two years’ worth of digital transformation in two months.” Google now sees that attackers have adapted to these changed conditions and are boosting attacks on newly online businesses, with bots high on the list of tools used. 
    Bot attacks can cover anything from web scraping where bots are used to gather content or data, to bots that try to beat Captchas, to ad fraud, card fraud and inventory fraud. Of particular concern are distributed denial of service attacks (DDoS), where junk traffic is directed at an online service with the purpose of flooding it to the point of knocking it offline. 


    According to the advertising giant, 71% of companies experienced an increase in the number of successful bot attacks, and 56% of companies reported seeing different types of attacks, but it said many companies are using the wrong mix of technology to protect themselves.
    Google’s research has found that while 78% of organizations are using DDoS protection, such as web application firewalls, and content distribution networks (CDN), less than a fifth of them are using a “full bot management system”. 
    “Bots attack an application’s business logic, and only a bot management solution can protect against that sort of threat,” says Kelly Anderson, a product marketing manager for Google Cloud Platform. 
    “To effectively safeguard web applications from bot attacks, organizations must use tools like DDoS protection, WAF, and/or CDNs, alongside a bot management solution.”
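    To make the distinction concrete: a WAF or CDN filters traffic at the network edge, while bot management reasons about client behaviour over time. The sketch below shows one of the simplest behavioural signals – per-client request rate over a sliding window – in Python; the thresholds and names are illustrative, and real products combine many such signals (fingerprints, reputation, behaviour).

```python
import time
from collections import defaultdict, deque

# One simple bot-management signal: per-client request rate over a
# sliding window. Thresholds are illustrative placeholders.
WINDOW_SECONDS = 10
MAX_REQUESTS = 20

_history: dict[str, deque] = defaultdict(deque)

def looks_like_bot(client_id: str, now: float | None = None) -> bool:
    """Flag clients exceeding MAX_REQUESTS within WINDOW_SECONDS."""
    now = time.time() if now is None else now
    q = _history[client_id]
    q.append(now)
    while q and q[0] < now - WINDOW_SECONDS:
        q.popleft()
    return len(q) > MAX_REQUESTS

# A client firing 25 requests in 2.5 seconds trips the heuristic.
flagged = [looks_like_bot("198.51.100.7", now=1000.0 + i * 0.1)
           for i in range(25)]
print("flagged after burst:", flagged[-1])  # True
```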

    According to Anderson, there’s a missing link between application security and security operations teams on one side, and e-commerce, fraud, and network security pros on the other, which allows bots to pose a threat to business operations. 
    “Effective bot management relies on collaboration between many teams within an organization, including security, customer experience, e-commerce, and marketing. But on average, only two teams are involved in bot management, usually the application security and security operations teams. Yet, it’s the e-commerce, fraud, and network security professionals that most commonly consume the data from bot management tools. This disconnect can lead to the commerce or fraud teams being left out of critical bot management decisions,” she explains. 
    Because of this disconnection between security and anti-fraud teams, firms spend 53 working days — or nearly two months — across roles resolving attacks.
    Anderson wants businesses to invest in a bot management system that can detect the most sophisticated bots. 
    “Good automated traffic comes from approved partner applications and search engines, while bad traffic comes from malicious bot activity. Bots account for over half of all automated web traffic and nearly a quarter of all internet traffic in 2019, leaving professionals to thread the needle,” Google says in a research paper. 
    Google commissioned the research from analyst firm Forrester Consulting, which looked at bot management approaches. The survey drew 425 respondents with responsibility for fraud management, attack detection and response, and the protection of user data.
    The company found that most organizations are only protecting themselves against card fraud, ad fraud, and influence fraud attacks. 
    “Only 15% of businesses are currently protecting themselves against web scraping attacks, yet 73% face such an attack on a weekly basis,” Forrester Consulting says. 
    Almost two-thirds of respondents said they lost between 1% and 10% of revenue to web scraping attacks alone. 
    “Many businesses focus on the types of attacks that are most commonly in the news, rather than the attacks that can cause the most damage to their bottom lines,” the consulting firm says.