More stories

  • Singapore to develop mobile defence systems with Ghost Robotics

    Singapore’s Defence Science and Technology Agency (DSTA) has inked a partnership with Philadelphia-based Ghost Robotics to identify use cases involving legged robots for security, defence, and humanitarian applications. They will look to test and develop mobile robotic systems, as well as the associated technology enablers, that can be deployed in challenging urban terrain and harsh environments.

    The collaboration would also see robots from Ghost Robotics paired with DSTA’s robotics command, control, and communications (C3) system, the two partners said in a joint statement released Thursday. The Singapore government agency said its C3 capabilities were the “nerve centre” of military platforms and command centres, tapping data analytics, artificial intelligence, and computer vision technologies to facilitate “tighter coordination” and effectiveness during military and other contingency operations. Its robotics C3 system enabled simultaneous control and monitoring of multiple unmanned ground and air systems to deliver a holistic situation picture for coordinated missions, including surveillance in dense urban environments.

    Under the partnership, DSTA and Ghost Robotics would test and develop “novel technologies and use cases” for quadrupedal unmanned ground vehicles, which would be integrated with multi-axis manipulators to enhance how the autonomous vehicles interact with their environment and the objects within it. Power technologies, such as solid-state batteries or fuel cells, would also be integrated to allow the robotic systems to operate for extended periods.

    DSTA’s deputy chief executive for operations and director of land systems, Roy Chan, said: “In the world of fast-evolving technology, close collaboration between organisations is imperative to co-create use cases and innovative solutions. In partnering with Ghost Robotics, DSTA hopes to advance robotic capabilities in defence and shape the battlefield of the future.

    “We envision that robots would one day become a defender’s best friend and be deployed to undertake more risky and complex operations in tough terrains,” Chan said.

    DSTA is tasked with tapping science and technology to develop capabilities for the Singapore Armed Forces (SAF), including the use of autonomous vehicles. The Ministry of Defence and SAF in June 2021 unveiled a transformation strategy to address evolving security challenges and threats. It encompassed efforts to leverage technological advancements to better tap data and new technologies, such as robotics C3 systems, and to integrate these into warfighting concepts to improve operational effectiveness and reduce manpower requirements.

    According to Ghost Robotics, its quadrupedal unmanned ground vehicles were built for unstructured terrain, on which a typical wheeled or tracked device could not operate efficiently.

  • 7-Eleven breached customer privacy by collecting facial imagery without consent

    In Australia, the country’s information commissioner has found that 7-Eleven breached customers’ privacy by collecting their sensitive biometric information without adequate notice or consent. From June 2020 to August 2021, 7-Eleven conducted surveys that required customers to fill out information on tablets with built-in cameras. These tablets, which were installed in 700 stores, captured customers’ facial images at two points during the survey-taking process: when the individual first engaged with the tablet, and after they completed the survey.

    After becoming aware of this activity in July last year, the Office of the Australian Information Commissioner (OAIC) commenced an investigation into 7-Eleven’s survey. During the investigation [PDF], the OAIC found 7-Eleven stored the facial images on tablets for around 20 seconds before uploading them to a secure server hosted in Australia within the Microsoft Azure infrastructure. The facial images were then retained on the server, as algorithmic representations, for seven days to allow 7-Eleven to identify and correct any issues and reprocess survey responses, the convenience store giant claimed.

    The facial images were uploaded to the server as algorithmic representations, or “faceprints”, which were then compared with other faceprints to exclude responses that 7-Eleven believed may not be genuine. 7-Eleven also used the personal information to understand the demographic profile of customers who completed the survey, the OAIC said. 7-Eleven claimed it received consent from customers who participated in the survey, as it provided a notice on its website, where the survey resided, stating that 7-Eleven may collect photographic or biometric information from users.

    As at March 2021, approximately 1.6 million survey responses had been completed. Angelene Falk, Australia’s Information Commissioner and Privacy Commissioner, determined that this large-scale collection of sensitive biometric information breached Australia’s privacy laws and was not reasonably necessary for the purpose of understanding and improving customers’ in-store experience. In Australia, an organisation is prohibited from collecting sensitive information about an individual unless consent is provided.

    Falk said facial images that show an individual’s face are sensitive information, adding that any algorithmic representation of a facial image is also sensitive information. In regard to 7-Eleven’s claim that consent was provided, Falk said 7-Eleven did not provide any information about how customers’ facial images would be used or stored, which meant it did not receive any form of consent when it collected the images.

    “For an individual to be ‘identifiable’, they do not necessarily need to be identified from the specific information being handled. An individual can be ‘identifiable’ where it is possible to identify the individual from available information, including, but not limited to, the information in issue,” Falk said. “While I accept that implementing systems to understand and improve customers’ experience is a legitimate function for 7-Eleven’s business, any benefits to the business in collecting this biometric information were not proportional to the impact on privacy.”

    As part of the determination, Falk has ordered 7-Eleven to cease collecting facial images and faceprints as part of the customer feedback mechanism. 7-Eleven has also been ordered to destroy all the faceprints it collected.
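    The determination turns on 7-Eleven matching “faceprints” (algorithmic representations of faces) against one another to exclude responses it believed were not genuine. As a rough sketch of how such a duplicate check could work in principle (the actual model, embedding size, and similarity threshold are not disclosed, so everything below is an assumption for illustration), face embeddings can be compared by cosine similarity and near-identical pairs flagged:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def flag_duplicates(faceprints, threshold=0.95):
    """Return indices of responses whose faceprint closely matches an
    earlier one -- a crude stand-in for 'may not be genuine' filtering."""
    flagged = []
    for i, fp in enumerate(faceprints):
        if any(cosine_similarity(fp, faceprints[j]) >= threshold for j in range(i)):
            flagged.append(i)
    return flagged
```

    A real system would derive embeddings from a trained face-recognition model rather than toy vectors; the privacy finding applies regardless of the representation, since Falk held that faceprints are themselves sensitive information.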

  • Singapore must take caution with AI use, review approach to public trust

    In its quest to drive the adoption of artificial intelligence (AI) across the country, multi-ethnic Singapore needs to take special care navigating its use in some areas, specifically law enforcement and crime prevention. It should also build on its belief that trust is crucial for citizens to be comfortable with AI, and recognise that doing so will require nurturing public trust across different aspects of its society.

    It must have been at least two decades ago when I attended a media briefing during which an executive demonstrated her company’s latest speech recognition software. As with most demos, no matter how much you prepared for it, things could go desperately wrong. Her voice commands often were wrongly executed and several spoken words in every sentence were inaccurately transcribed. The harder she tried, the more things went wrong, and by the end of the demo she looked clearly flustered.

    She had a relatively strong accent and I assumed that was likely the main issue, but she had spent hours training the software. The company was known at the time specifically for its speech recognition products, so it would not be wrong to assume its technology was then among the most advanced in the market. I walked away from that demo thinking it would be near impossible, given the vast difference in accents within Asia alone, even amongst those who spoke the same language, for speech recognition technology to be sufficiently accurate.

    Some 20 years later, speech-to-text and translation tools clearly have come a long way, but they are still not always accurate. An individual’s accent and speech patterns remain key variables that determine how well spoken words are transcribed. However, wrongly converted words are unlikely to cause much damage, save for a potentially embarrassing moment on the speaker’s part. The same is far from true where facial recognition technology is concerned.

    In January, police in Detroit, USA, admitted their facial recognition software had falsely identified a shoplifter, leading to his wrongful arrest. Vendors such as IBM, Microsoft, and Amazon have maintained a ban on the sale of facial recognition technology to police and law enforcement, citing human rights concerns and racial discrimination, and most have urged governments to establish stronger regulations to govern and ensure the ethical use of the technology.

    Amazon has said its ban would remain until regulators addressed issues around the use of its Rekognition technology to identify potential criminal suspects, while Microsoft said it would not sell facial recognition software to police until federal laws were in place to regulate the technology. IBM chose to exit the market completely over concerns facial recognition technology could instigate racial discrimination and injustice. Its CEO Arvind Krishna wrote in a June 2020 letter to the US Congress: “IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and principles of trust and transparency.

    “AI is a powerful tool that can help law enforcement keep citizens safe. But vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported,” Krishna wrote.

    I recently spoke with Ieva Martinkenaite, who chairs the AI task force at the GSMA-European Telecommunications Network Operators’ Association, which drafts AI regulation for the industry in Europe. Martinkenaite’s day job sees her as head of analytics and AI for Telenor Research.
    In our discussion on how Singapore could best approach the issue of AI ethics and use of the technology, Martinkenaite said every country would have to decide what it felt was acceptable, especially when AI was used in high-risk areas such as detecting criminals. Here, she noted, challenges remained amidst evidence of discriminatory results, including against certain ethnic groups and genders. In deciding what was acceptable, she urged governments to have an active dialogue with citizens. She added that until veracity issues related to the analysis of varying skin colours and facial features were properly resolved, such AI technology should not be deployed without human intervention, proper governance, and quality assurance in place.

    Training AI for multi-ethnic Singapore

    Facial recognition software has come under fire for its inaccuracy, in particular in identifying people with darker skin tones. A 2017 MIT study, which found that darker-skinned females were 32 times more likely to be misclassified than lighter-skinned males, pointed to the need for more phenotypically diverse datasets to improve the accuracy of facial recognition systems. Presumably, AI and machine learning models trained with less data on one ethnic group would exhibit a lower degree of accuracy in identifying individuals in that group.

    Singapore’s population comprises 74.3% Chinese, 13.5% Malays, and 9% Indians, with the remaining 3.2% made up of other ethnic groups such as Eurasians. Should the country decide to tap facial recognition systems to identify individuals, must the data used to train the AI model consist of mostly Chinese faces, since that ethnic group forms the population’s majority? If so, will that lead to a lower accuracy rate when the system is used to identify a Malay or Indian, since fewer data samples of these ethnic groups were used to train the AI model?
    Will using an equal proportion of data for each ethnic group necessarily lead to a more accurate score across the board? Since there are more Chinese residents in the country, should the facial recognition technology be better trained to more accurately identify this ethnic group, because the system will likely be used more often to recognise these individuals?

    These questions touch only on the “right” volume of data that should be used to train facial recognition systems. There still are many others concerning data alone, such as where training data should be sourced, how the data should be categorised, and how much training data is deemed sufficient before the system is considered “operationally ready”. Singapore will have to navigate these carefully should it decide to tap AI in law enforcement and crime prevention, especially as it regards racial and ethnic relations as important, but sensitive, matters to manage. Beyond data, discussions and decisions will be needed on, amongst others, when AI-powered facial recognition systems should be used, how autonomously they should be allowed to operate, and when human intervention is required.

    The European Parliament just last week voted in support of a resolution banning law enforcement from using facial recognition systems, citing various risks including discrimination, opaque decision-making, privacy intrusion, and challenges in protecting personal data.
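    One way to see why these proportions matter: a headline accuracy figure is a population-weighted average, so strong performance on the majority group can mask much weaker performance on minority groups. A toy calculation illustrates this; the population shares come from the article, while the per-group accuracy figures below are purely hypothetical:

```python
# Population shares are Singapore's resident-ethnicity breakdown from the
# article; per-group accuracies are invented purely for illustration --
# they are not measurements of any real system.
shares = {"Chinese": 0.743, "Malay": 0.135, "Indian": 0.090, "Others": 0.032}
accuracy = {"Chinese": 0.98, "Malay": 0.85, "Indian": 0.85, "Others": 0.80}

# Overall accuracy is the population-weighted average of per-group accuracies.
overall = sum(shares[g] * accuracy[g] for g in shares)
print(f"Overall accuracy: {overall:.1%}")
```

    Under these assumptions the system would report roughly 94.5% overall accuracy even though three of the four groups sit at 85% or below, which is why per-group evaluation, not a single aggregate score, matters before deployment in high-risk settings.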

    “These potential risks are aggravated in the sector of law enforcement and criminal justice, as they may affect the presumption of innocence, the fundamental rights to liberty and security of the individual and to an effective remedy and fair trial,” the European Parliament said. Specifically, it pointed to facial recognition services such as Clearview AI, which had built a database of more than three billion pictures that were illegally collected from social networks and other online platforms. The European Parliament further called for a ban on law enforcement using automated analysis of other human features, such as fingerprints, voice, gait, and other biometric and behavioural traits. The resolution passed, though, is not legally binding.

    Because data plays an integral role in feeding and training AI models, what constitutes such data inevitably has been the crux of key challenges and concerns behind the technology. The World Health Organization (WHO) in June issued guidance cautioning that AI-powered healthcare systems trained primarily on data of individuals in high-income countries might not perform well for individuals in low- and middle-income environments. It also cited other risks, such as unethical collection and use of healthcare data, cybersecurity, and bias being encoded in algorithms. “AI systems must be carefully designed to reflect the diversity of socioeconomic and healthcare settings and be accompanied by training in digital skills, community engagement, and awareness-raising,” it noted. “Country investments in AI and the supporting infrastructure should help to build effective healthcare systems by avoiding AI that encodes biases that are detrimental to equitable provision of and access to healthcare services.”

    Fostering trust goes beyond AI

    Singapore’s former Minister for Communications and Information and Minister-in-charge of Trade Relations, S. Iswaran, previously acknowledged the tensions around AI and the use of data, and noted the need for tools and safeguards to better assure people with privacy concerns. In particular, Iswaran stressed the importance of establishing trust, which he said underpinned everything, whether data or AI. “Ultimately, citizens must feel these initiatives are focused on delivering welfare benefits for them and ensured their data will be protected and afforded due confidentiality,” he said.

    Singapore has been a strong advocate for the adoption of AI, introducing in 2019 a national strategy to leverage the technology to create economic value, enhance citizen lives, and arm its workforce with the necessary skillsets. The government believes AI is integral to its smart nation efforts and that a nationwide roadmap was necessary to allocate resources to key focus areas. The strategy also outlines how government agencies, organisations, and researchers can collaborate to ensure a positive impact from AI, and directs attention to areas where change or potential new risks must be addressed as AI becomes more pervasive. The key goal is to pave the way for Singapore, by 2030, to be a leader in developing and deploying “scalable, impactful AI solutions” in key verticals. Singaporeans also will trust the use of AI in their lives, a trust to be nurtured through a clear awareness of the benefits and implications of the technology.

    Building trust, however, will need to go beyond simply demonstrating the benefits of AI. People need to trust that the authorities, across various aspects of their lives, will use technology in ways that safeguard their welfare and data. The lack of trust in one aspect can spill over and affect trust in others, including the use of AI-powered technologies.

    Singapore in February urgently pushed through new legislation detailing the scope of local law enforcement’s access to COVID-19 contact tracing data.
    The move came weeks after it was revealed the police could access the country’s TraceTogether contact tracing data for criminal investigations, contradicting previous assertions this information would only be used when an individual tested positive for the coronavirus. It sparked a public outcry and prompted the government to announce plans for the new bill limiting police access to seven categories of “serious offences”, including terrorism and kidnapping.

    Early this month, Singapore also passed the Foreign Interference (Countermeasures) Bill amidst heated debate, less than a month after it was first proposed in parliament. Pitched as necessary to combat threats from foreign interference in local politics, the Bill has been criticised for being overly broad in scope and restrictive of judicial review. Opposition party the Workers’ Party also pointed to the lack of public involvement and the speed at which the Bill was passed.

    Will citizens trust their government’s use of AI-powered technologies in “delivering welfare benefits”, especially in law enforcement, when they have doubts, well-founded or otherwise, that their personal data in other areas is properly policed? Doubt in one policy can metastasise and drive further doubt in others. With trust, as Iswaran rightly pointed out, an integral part of driving the adoption of AI in Singapore, the government may need to review its approach to fostering this trust amongst its population.

    According to Deloitte, cities looking to use technology for surveillance and policing should balance security interests with the protection of civil liberties, including privacy and freedom. “Any experimentation with surveillance and AI technologies needs to be accompanied by proper regulation to protect privacy and civil liberties. Policymakers and security forces need to introduce regulations and accountability mechanisms that create a trustful environment for experimentation of the new applications,” the consulting firm noted.
    “Trust is a key requirement for the application of AI for security and policing. To get the most out of technology, there must be community engagement.”

    Singapore must assess whether it has indeed nurtured a trustful environment, with the right legislation and accountability, in which citizens are properly engaged in dialogue, so they can collectively decide the country’s acceptable use of AI in high-risk areas.

  • Google analysed 80 million ransomware samples: Here's what it found

    Google has published a new ransomware report, revealing Israel was far and away the largest submitter of samples during the period analysed. The tech giant commissioned cybersecurity firm VirusTotal to conduct the analysis, which entailed reviewing 80 million ransomware samples from 140 countries. According to the report [PDF], Israel, South Korea, Vietnam, China, Singapore, India, Kazakhstan, the Philippines, Iran, and the UK were the 10 most affected territories based on the number of submissions reviewed by VirusTotal. Israel had the highest number of submissions, a near-600% increase from its baseline, although the report did not state what Israel’s baseline amount of submissions was during that period.

    Ransomware activity was at its peak during the first two quarters of 2020, which VirusTotal attributed to activity by ransomware-as-a-service group GandCrab. “GandCrab had an extraordinary peak in Q1 2020 which dramatically decreased afterwards. It is still active but at a different order of magnitude in terms of the number of fresh samples,” VirusTotal said. There was another sizeable peak in July 2021, driven by the Babuk ransomware gang, an operation launched at the beginning of 2021. Babuk’s ransomware attacks generally feature three distinct phases: initial access, network propagation, and action on objectives.

    GandCrab was the most active ransomware gang since the start of 2020, accounting for 78.5% of samples. GandCrab was followed by Babuk and Cerber, which accounted for 7.6% and 3.1% of samples, respectively.
    According to the report, 95% of ransomware files detected were Windows-based executables or dynamic link libraries (DLLs), and 2% were Android-based. The report also found that exploits made up only a small portion of the samples: 5%. “We believe this makes sense given that ransomware samples are usually deployed using social engineering and/or by droppers (small programs designed to install malware),” VirusTotal said. “In terms of ransomware distribution attackers don’t appear to need exploits other than for privilege escalation and for malware spreading within internal networks.”

    After reviewing the samples, VirusTotal also said there was a baseline of between 1,000 and 2,000 first-seen ransomware clusters at all times throughout the analysed period. “While big campaigns come and go, there is a constant baseline of ransomware activity that never stops,” it said.

  • Brazilian e-commerce firm Hariexpress leaks 1.75 billion sensitive files

    Around 1.75 billion sensitive files were leaked by a Brazilian e-commerce integrator that provides services to some of the country’s largest online shopping websites. Hariexpress is headquartered in São Paulo and integrates multiple processes into a single platform to improve the efficiency and operational capability of retailers with more than one e-commerce store. Some of the company’s clients include Magazine Luiza, Mercado Livre, Amazon, and B2W Digital. The national postal service, Correios, is also among the company’s partners and was also impacted by the incident.

    According to security researcher Anurag Sen at Safety Detectives, who discovered the leak in July 2021, the incident is attributed to a misconfigured and unprotected ElasticSearch server and involves more than 610GB of exposed data. The researchers noted they were unsuccessful in their attempts to resume communication with the company after an initial contact.

    Banking information relating to customers was not compromised, according to the experts; on the other hand, the leak exposed a vast set of sensitive information, including customers’ full names, e-mail addresses, business and residential addresses, and company registration and social security numbers. In addition, all manner of details relating to purchases, including dates, times, and prices of products sold, as well as copies of invoices and login credentials to the Hariexpress service, were also exposed, according to Safety Detectives.

    The researchers could not estimate the exact number of impacted users, due to the amount of duplicate email addresses found in the exposed data, but it is estimated that several thousand users were potentially affected by the leak. Moreover, it is not possible to tell whether other parties had access to the data, according to the researchers. The experts warned that the data set, which contains information that directly identifies users of marketplaces integrated by the company, could be used in phishing and social engineering attacks. The report also warned about the potential for other types of crime, such as burglaries, as the exposed data includes residential and business addresses, and extortion, since the information also includes purchases of intimate products.

    Contacted by ZDNet, the company did not respond to requests for comment. Brazil’s National Data Protection Agency was also contacted for comment on the case and had not responded at the time of publication.
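    The root cause reported here, an ElasticSearch server reachable without authentication, is straightforward to probe for defensively, because an unsecured cluster will answer the standard `/_cluster/health` endpoint with no credentials at all. A minimal sketch (the host and port are placeholders; only probe systems you are authorised to test):

```python
import json
import urllib.request

def elasticsearch_is_open(host, port=9200, timeout=5):
    """Return True if an ElasticSearch cluster answers an unauthenticated
    health probe -- the kind of misconfiguration behind this leak.
    Only probe hosts you are authorised to test."""
    url = f"http://{host}:{port}/_cluster/health"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            health = json.load(resp)
            # A locked-down cluster rejects the request before this point.
            return "cluster_name" in health
    except Exception:
        # Auth challenges, connection refusals, and timeouts all land here.
        return False
```

    A properly configured deployment sits behind authentication or a private network, so the same probe would hit a 401 or a connection error and the function would return False.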

  • Irish regulators support Facebook's 'consent bypass' legal maneuver, suggest $42 million fine for GDPR violations

    Regulators in Ireland have proposed up to $42 million in fines for Facebook after the company was accused of violating the GDPR through deceptive data collection policies. Privacy expert Max Schrems and his advocacy group noyb, which submitted the original complaint against Facebook, published a draft decision from the Irish Data Protection Commission (DPC) about the issue that was sent to the other European Data Protection Authorities.

    The decision suggests a fine of between $32 million and $42 million for Facebook’s violations of the GDPR, which include a failure to notify its customers about how it uses their data. Schrems and other privacy experts slammed the proposed fine for its relatively minuscule size and for the legal arguments Facebook is making to get out of stricter fines. Noyb said Facebook’s argument is effectively that it is exempt from most GDPR rules because of a minor change in its agreement with users.

    “Facebook’s legal argument is rather simple: By interpreting the agreement between user and Facebook as a ‘contract’ (Article 6(1)(b) GDPR) instead of ‘consent’ (Article 6(1)(a) GDPR) the strict rules on consent under the GDPR would not apply to Facebook — meaning that Facebook can use all data it has for all products it provides, including advertisement, online tracking and alike, without asking users for freely given consent that they could withdraw at any time,” noyb explained in a blog post. “Facebook’s switch from ‘consent’ to ‘contract’ happened on 25.5.2018 at midnight — exactly when the GDPR came into effect in the EU.”

    Schrems said it is painfully obvious that Facebook is trying to bypass the rules of the GDPR by relabelling the agreement on data use as a ‘contract’. If this is accepted by regulators, any company could simply write the processing of data into a contract and thereby legitimise any use of customer data without consent, Schrems explained. “This is absolutely against the intentions of the GDPR, that explicitly prohibits to hide consent agreements in terms and conditions,” Schrems said.

    Noyb noted that studies have shown users do not see a website’s terms of service as a contract. A Gallup Institute survey found just 1.6% of respondents saw the agreement they make with Facebook when they sign up for the site as a “contract”, while more than 63% said they see the agreement as consent.

    Schrems and noyb also made charged claims in the blog post, writing that representatives from Facebook and the DPC met in 2018 and created a way for Facebook to get around certain GDPR regulations. He went on to explain that regulators were fining Facebook for “not being transparent” about how it processes data but still expressed support for the company’s “consent bypass”. Both Facebook and the DPC did not respond to requests for comment.

    “The DPC developed the ‘GDPR bypass’ with Facebook that it is now greenlighting as a regulator. Instead of a regulator, it acts as a ‘big tech’ advisor,” Schrems said. “Basically the DPC says Facebook can bypass the GDPR, but they must be more transparent about it. With this approach, Facebook can continue to process data unlawfully, add a line to the privacy policy and just pay a small fine, while the DPC can pretend they took some action.”

    Schrems also took issue with how the DPC analysed noyb’s complaint, criticising the regulators for omitting key parts of its submission and refusing oral hearings. The draft was sent to other data protection authorities across Europe and will now be reviewed.
    Regulators from other countries can submit complaints, which will then be handled by the European Data Protection Board, which can overrule decisions made by Irish regulators. WhatsApp was slapped with a 225 million euro fine last month after a GDPR investigation found that the platform was not transparent about how it shared data with its parent company, Facebook. In that case, Irish regulators faced similar backlash for the initial 50 million euro fine; the European Data Protection Board overruled the DPC and increased it significantly. “Our hope lies with the other European authorities. If they do not take action, companies can simply move consent into terms and thereby bypass the GDPR for good,” Schrems said.

    Privacy expert Cillian Kieran told ZDNet the fine mentioned in the draft is just one-hundredth of the maximum possible under the GDPR. Kieran also took issue with how the DPC represented Facebook’s position and the core tenets of its argument, saying there need to be consistent legal definitions designed into the technical systems themselves.

    “How can the fine in the draft decision, an amount which Facebook recovers in revenue within less than 5 hours on average, possibly be dissuasive? Much of the decision goes into countering allegations that Facebook violated consent requirements. The decision argues that consent is not necessary in this situation, nullifying any issues of consent. This points to a serious disparity in how authorities, advocates, and end-users like the complainant view the principles of processing under GDPR,” Kieran said. “Maybe if the Irish DPC did not form a bottleneck on dozens of GDPR investigations, we would be getting these vital interpretations on consent and other legal bases sooner than three and a half years after GDPR takes effect. I agree with Schrems that this decision is disappointing and inadequate, both in the fine and in the interpretation of contracts versus consent.”


  • Marketers want to influence your dreams, consumers not so much

    Digital marketers are wildly bullish on dream tech (playing ads right before people sleep to influence their dreams), and 39% of consumers are open to the technology too, according to a survey. The American Marketing Association-New York’s 2021 Future of Marketing Survey canvassed the marketing technology landscape relative to its 2019 report. Overall, consumers are beginning to accept new marketing technology but remain worried about privacy.

    What caught my eye in the survey was dream tech, which was opposed by 32% of consumers and supported by 39%, with the remainder falling into the don’t-know category. Given that the dream-tech concept wasn’t around in 2019, the favorability rating is a bit stunning. Here’s how favorability among consumers stacks up across marketing channels: consumers are accepting of personalized ads (54% in favor), IoT devices (53%), and AI assistants (60%). Virtual reality headsets are viewed favorably by 61% of consumers, and augmented reality devices checked in at 49%. In other words, dream tech is off to a good start with consumers, even if the definition of it remains a bit murky.

    The report also looked at marketers’ expectations and what technologies would be adopted at scale. The kicker: 77% of marketers declared that they would deploy more dream tech in the next three years, a tally that topped smart speakers and IoT devices. I can’t wait to see how this consumer-versus-marketer adoption of dream tech plays out. Here’s a guess: Facebook figures out who looks at the app before bed and hits you with something to influence your dreams. Congressional hearings will ensue — again — but at least Facebook is used to it.

    One area of agreement was data collection and how it’s a privacy issue. Consumers would limit data collection to email, age, and name, and marketers generally agreed. Marketers were more comfortable with collecting location than consumers.
Fifty-four percent of marketers want to collect location data and only 41% of consumers want to part with it.  More

  • in

    Best Android VPN 2021: Our top four

    One of the most interesting things about the Android OS is the wide variety of devices it runs on. Sure, there are Android phones and tablets. But Android also runs inside most recent Chromebooks and now, improbably but in a fully supported way, on Windows 11 devices, even Intel-based Windows 11 computers. That diversity of deployment makes the Android implementations of VPN clients particularly interesting. If, for example, you want to run a VPN on your Chromebook, your best bet is to install an Android VPN client and let that client do all the heavy lifting. We discussed that in depth in our Best VPN for Chrome and Chromebooks 2021 guide. Unfortunately, the more open Android environment means that there are many different implementations, versions aren't regularly updated, and, as an interesting piece by the NordVPN folks shows, malware is more prevalent. That makes built-in malware scanning within the VPN client particularly helpful. In this overview, we look at four of the most popular Android VPNs. Here's what we think:

    4.3 Google Play Store average, 446K ratings

    Family Sharing: Yes
    Malware Scanner: Yes
    Simultaneous Connections: 6
    Kill Switch: Yes
    Platforms: Windows, Mac, iOS, Android, Linux, Android TV, Chrome, Firefox
    Logging: None, except billing data
    Countries: 59
    Servers: 5,517
    Trial/MBG: 30 days

    Also: How does NordVPN work? Plus how to set it up and use it

    NordVPN is one of the most popular consumer VPNs out there. Last year, Nord announced that it had been breached. Unfortunately, the breach had been active for more than 18 months. While there were failures at every level, NordVPN has taken substantial steps to remedy the breach.

    Also: My in-depth review of NordVPN

    In our review, we liked that it offers capabilities beyond basic VPN, including support for P2P sharing, a service it calls Double VPN that adds a second layer of encryption, Onion over VPN, which allows for Tor capabilities over its VPN, and even a dedicated IP if you're trying to run a VPN that also doubles as a server. It supports all the usual platforms and a bunch of home network platforms as well. The company also offers NordVPN Teams, which provides centralized management and billing for a mobile workforce.

    Also: My interview with NordVPN management on how they run their service

    Performance testing was adequate, although ping speeds were slow enough that I wouldn't want to play a twitch video game over the VPN. To be fair, most VPNs have pretty terrible ping speeds, so this isn't a weakness unique to Nord. Overall, a solid choice, and with a 30-day money-back guarantee, worth a try.
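The Double VPN idea above can be pictured as onion-style layering: the client encrypts once per hop, and each server can peel off only its own layer. Here's a deliberately toy sketch in Python; the repeating-key XOR "cipher" is illustrative only, not real cryptography, and nothing below reflects NordVPN's actual implementation:

```python
# Toy illustration of Double VPN style layering. The "cipher" is a
# repeating-key XOR -- NOT secure, used only to show the layering idea.
import itertools
import secrets

def xor_layer(key: bytes, data: bytes) -> bytes:
    """Apply (or remove) one encryption layer; XOR is its own inverse."""
    return bytes(a ^ b for a, b in zip(data, itertools.cycle(key)))

key_hop1 = secrets.token_bytes(32)   # shared with the first VPN server
key_hop2 = secrets.token_bytes(32)   # shared with the second VPN server

payload = b"example request"
# Innermost layer first (for hop 2), then the outer layer (for hop 1).
onion = xor_layer(key_hop1, xor_layer(key_hop2, payload))

after_hop1 = xor_layer(key_hop1, onion)   # hop 1 peels its own layer...
assert after_hop1 != payload              # ...but still sees ciphertext
assert xor_layer(key_hop2, after_hop1) == payload  # hop 2 recovers it
```

The point of the layering is that neither server alone holds both keys, so compromising one hop exposes only ciphertext.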

    4.3 Google Play Store average, 220K ratings

    Family Sharing: Yes
    Malware Scanner: No
    Simultaneous Connections: 5, or unlimited with the router app
    Kill Switch: Yes
    Platforms: A whole lot (see the full list here)
    Logging: No browsing logs, some connection logs
    Countries: 94
    Locations: 160
    Trial/MBG: 30 days

    ExpressVPN has been burning up the headlines with some pretty rough news. We've chosen to leave ExpressVPN in this recommendation, and I wouldn't necessarily dismiss ExpressVPN out of hand because of these reports, but it's up to you to gauge your risk level. The best way to do that is to read our in-depth analysis.

    ExpressVPN is one of the most popular VPN providers out there, offering a wide range of platforms and protocols. Platforms include Windows, Mac, Linux, routers, iOS, Android, Chromebook, Kindle Fire, and even the Nook device. There are also browser extensions for Chrome and Firefox. Plus, ExpressVPN works with PlayStation, Apple TV, Xbox, Amazon Fire TV, and the Nintendo Switch. There's even a manual setup option for Chromecast, Roku, and Nvidia Shield.

    With 160 server locations in 94 countries, ExpressVPN has a considerable VPN network across the internet. In CNET's review of the service, staff writer Rae Hodge reported that ExpressVPN lost less than 2% of performance with the VPN enabled and using the OpenVPN protocol vs. a direct connection.

    While the company does not log browsing history or traffic destinations, it does log dates connected to the VPN service, amount of data transferred, and VPN server location. We do want to give ExpressVPN kudos for making this information very clear and easily accessible.

    Exclusive offer: Get 3 extra months free.

    4.2 Google Play Store average, 15K ratings

    Family Sharing: Yes
    Malware Scanner: No
    Simultaneous Connections: Unlimited
    Kill Switch: Yes
    Platforms: Windows, Mac, iOS, Android, Linux, Chrome, plus routers, Fire Stick, and Kodi
    Logging: None, except billing data
    Servers: 1,500
    Locations: 75
    Trial/MBG: 30 days

    IPVanish is a deep and highly configurable product that presents itself as a click-and-go solution. I think the company is selling itself short doing this. A quick visit to its website shows a relatively generic VPN service, but that's not the whole truth.

    Also: My in-depth review of IPVanish

    Its UI provides a wide range of server selection options, including some great performance graphics. It also supports a wide variety of protocols, so no matter what you're connecting to, you know what to expect. The company also provides an excellent server list with good current status information. There's also a raft of configuration options for the app itself.

    In terms of performance, connection speed was crazy fast. Overall transfer performance was good. However, from a security perspective, it wasn't able to hide that I was connecting via a VPN, although the data transferred was secure. Overall, a solid product with a good user experience that's fine for home connections as long as you're not trying to hide the fact that you're on a VPN.

    The company also has a partnership with SugarSync and provides 250GB of encrypted cloud storage with each plan.

    4.0 Google Play Store average, 36K ratings

    Family Sharing: Yes
    Malware Scanner: Yes
    Simultaneous Connections: Unlimited
    Kill Switch: Yes
    Platforms: Windows, Mac, Linux, iOS, Android, Fire TV, Firefox, Chrome
    Logging: None, except billing data
    Trial/MBG: 30 days

    At two bucks a month for a two-year plan (billed in one chunk), Surfshark offers a good price for a solid offering. In CNET's testing, no leaks were found (and given that much bigger names leaked connection information, that's a big win). The company seems to have a very strong security focus, offering AES-256-GCM, RSA-2048, and Perfect Forward Secrecy. To prevent WebRTC leaks, Surfshark offers a special-purpose browser plugin designed specifically to combat them.

    Surfshark's performance was higher than NordVPN's and Norton Secure VPN's, but lower than ExpressVPN's and IPVanish's. That said, Surfshark also offers a multihop option that lets you route connections through two VPN servers across the Surfshark private network. We also like that the company offers some inexpensive add-on features, including ad blocking, anti-tracking, access to a non-logging search engine, and a tool that checks your email address against data breach lists.

    Will these apps work on all Android devices?

    Probably not. Unfortunately, many Android-based devices are not updated to the latest Android releases and have no update path. Sadly, some vendors even ship brand-new devices running older (and far more vulnerable) versions of Android. Generally, VPN vendors make sure their clients run on the most recent and a few previous versions of Android, but since there are still a tremendous number of devices in service running very out-of-date Android, it’s unlikely those will be able to run these apps. That’s why it’s good to take advantage of the money-back offerings and test your download shortly after purchase.

    What’s the difference between anti-malware software and VPN software?

    While both technologies are intended to protect you and your device, they protect different aspects of your usage. VPNs fundamentally protect data in motion, that is, the data being sent to and from the internet. The protection they generally offer is encryption, so hackers can't spy on the data while it moves. Anti-malware software protects against the execution of bad software on your device. Those apps often scan data as it comes into your machine, look at the apps on your machine, and intercept the actions of apps while they're running.

    As an analogy, think of VPN software as an armored car moving a payload safely from one location to another. Think of anti-malware as building inspectors constantly checking your building's infrastructure for, say, mold, and as gatekeepers checking everything that passes through to make sure it's not harmful.
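The division of labor described above can be sketched in a few lines of Python. This is a deliberately simplified toy: the XOR "tunnel" and the signature list are illustrative only, not how any real VPN or scanner works.

```python
# Toy contrast: a "VPN" protects data in motion; an "anti-malware
# scanner" inspects the content itself. Illustrative only.
import hashlib
import secrets

def toy_keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from a shared key (toy only)."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def vpn_tunnel(key: bytes, data: bytes) -> bytes:
    """'VPN' role: encrypt data in motion. XOR is its own inverse."""
    return bytes(a ^ b for a, b in zip(data, toy_keystream(key, len(data))))

BAD_SIGNATURES = [b"malicious-payload"]  # hypothetical signature database

def malware_scan(content: bytes) -> bool:
    """'Anti-malware' role: inspect the content itself; True if clean."""
    return not any(sig in content for sig in BAD_SIGNATURES)

key = secrets.token_bytes(32)            # shared by client and VPN server
request = b"GET /index.html"
on_the_wire = vpn_tunnel(key, request)   # what a Wi-Fi snoop would see
assert on_the_wire != request            # unreadable in transit
assert vpn_tunnel(key, on_the_wire) == request   # far end recovers it
assert malware_scan(request)             # content itself is clean
assert not malware_scan(b"x" + BAD_SIGNATURES[0])  # scanner flags content
```

The point of the sketch is the split: the tunnel never judges what the bytes mean, and the scanner never protects them in transit, which is why the two tools complement rather than replace each other.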

    Why do I even need a VPN on my phone?

    This question is often asked by people who know their phone’s data runs through their local carrier, which is moderately hard for hackers to intercept. And, generally, if you’re using your carrier’s LTE or 5G connection, you’re reasonably safe. But carriers have data caps and data carriage fees that can get expensive. Even if you have an unlimited data plan, carriers charge for hotspot use (ask me how I know, or how much that pisses me off). The way around that is to use whatever local Wi-Fi is available. Many coffee shops, airport lounges, hotels, and schools offer free Wi-Fi access. Unfortunately, that Wi-Fi is often open and easy to intercept. A big (and very important) use of VPNs on phones is to protect your data when you’re accessing the internet through one of these hotspots. In fact, I’d go so far as to say never, ever access the internet through a Wi-Fi hotspot without an active VPN on your device.

    You can follow my day-to-day project updates on social media. Be sure to follow me on Twitter at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.