In its quest to drive the adoption of artificial intelligence (AI) across the country, multi-ethnic Singapore needs to take special care navigating the technology's use in certain areas, specifically law enforcement and crime prevention. It also should hold on to its belief that trust is crucial for citizens to be comfortable with AI, and recognise that nurturing such trust will need to span many aspects of its society.

It must have been at least two decades ago when I attended a media briefing at which an executive was demonstrating the company's latest speech recognition software. As most demos went, no matter how much you prepared for them, things would go desperately wrong. Her voice-directed commands often were wrongly executed and several spoken words in every sentence were inaccurately translated into text. The harder she tried, the more things went wrong, and by the end of the demo, she was clearly flustered.

She had a relatively strong accent and I assumed that was likely the main issue, but she had spent hours training the software. The company was known at the time specifically for its speech recognition products, so it would not be wrong to assume its technology then was among the most advanced in the market. I walked away from that demo thinking it would be near impossible, given the vast difference in accents within Asia alone, even amongst those who spoke the same language, for speech recognition technology to be sufficiently accurate.
Some 20 years later, speech-to-text and translation tools clearly have come a long way, but they are still not always accurate. An individual's accent and speech patterns remain key variables that determine how well spoken words are translated. Wrongly converted words, however, are unlikely to cause much damage, save for a potentially embarrassing moment on the speaker's part. The same is far from true where facial recognition technology is concerned.
In January, police in Detroit, USA, admitted their facial recognition software had falsely identified a shoplifter, leading to his wrongful arrest. Vendors such as IBM, Microsoft, and Amazon have maintained a ban on the sale of facial recognition technology to police and law enforcement, citing human rights concerns and racial discrimination. Most have urged governments to establish stronger regulations to govern and ensure the ethical use of facial recognition tools.

Amazon said its ban would remain until regulators addressed issues around the use of its Rekognition technology to identify potential criminal suspects, while Microsoft said it would not sell facial recognition software to police until federal laws were in place to regulate the technology. IBM chose to exit the market completely over concerns facial recognition technology could instigate racial discrimination and injustice. Its CEO Arvind Krishna wrote in a June 2020 letter to the US Congress: "IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and principles of trust and transparency.

"AI is a powerful tool that can help law enforcement keep citizens safe. But vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported," Krishna wrote.

I recently spoke with Ieva Martinkenaite, who chairs the AI task force at GSMA-European Telecommunications Network Operators' Association, which drafts AI regulation for the industry in Europe. Martinkenaite's day job sees her as head of analytics and AI for Telenor Research. In our discussion on how Singapore could best approach the issue of AI ethics and the use of the technology, Martinkenaite said every country would have to decide what it felt was acceptable, especially when AI was used in high-risk areas such as detecting criminals. Here, she noted, challenges remained amidst evidence of discriminatory results, including against certain ethnic groups and genders. In deciding what was acceptable, she urged governments to have an active dialogue with citizens. She added that until veracity issues related to the analysis of varying skin colours and facial features were properly resolved, such AI technology should not be deployed without human intervention, proper governance, and quality assurance in place.

Training AI for multi-ethnic Singapore

Facial recognition software has come under fire for its inaccuracy, in particular in identifying people with darker skin tones. An MIT 2017 study, which found that darker-skinned females were 32 times more likely to be misclassified than lighter-skinned males, pointed to the need for more phenotypically diverse datasets to improve the accuracy of facial recognition systems. Presumably, AI and machine learning models trained with less data on one ethnic group would exhibit a lower degree of accuracy in identifying individuals in that group.

Singapore's population comprises 74.3% Chinese, 13.5% Malays, and 9% Indians, with the remaining 3.2% made up of other ethnic groups such as Eurasians. Should the country decide to tap facial recognition systems to identify individuals, must the data used to train the AI model consist of more Chinese faces, since the ethnic group forms the population's majority?
If so, will that lead to a lower accuracy rate when the system is used to identify a Malay or Indian, since fewer data samples of these ethnic groups were used to train the AI model? Will using an equal proportion of data for each ethnic group then necessarily lead to a more accurate score across the board? Or, since there are more Chinese residents in the country, should the technology be trained to more accurately identify this ethnic group, because the system will likely be used more often to recognise these individuals?

These questions touch only on the "right" volume of data that should be used to train facial recognition systems. There still are many others concerning data alone, such as where training data should be sourced, how the data should be categorised, and how much training data is deemed sufficient before the system is considered "operationally ready". Singapore will have to navigate these carefully should it decide to tap AI in law enforcement and crime prevention, especially as it regards racial and ethnic relations as important, but sensitive, to manage. Beyond data, discussions and decisions will need to be made on, amongst other things, when AI-powered facial recognition systems should be used, how autonomously they should be allowed to operate, and when human intervention would be required.
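One way to ground the data-volume questions above is to measure a system's accuracy separately for each demographic group, rather than as a single aggregate number; disaggregated reporting of this kind is essentially what the MIT study did, and what the bias audits Krishna called for would examine. The sketch below is a hypothetical illustration, not any vendor's actual evaluation code, of how a healthy-looking aggregate score can mask a large gap between a majority and a minority group.

```python
from collections import defaultdict

def per_group_accuracy(records):
    """Compute overall and per-group accuracy from evaluation records.

    Each record is a (group, correct) pair, where `group` labels the
    demographic group of the test subject and `correct` is True when
    the face recognition system identified the subject correctly.
    """
    totals = defaultdict(int)
    hits = defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        hits[group] += int(correct)
    overall = sum(hits.values()) / sum(totals.values())
    by_group = {g: hits[g] / totals[g] for g in totals}
    return overall, by_group

# Hypothetical results: the majority group dominates the test set,
# so a high aggregate score hides a much weaker minority-group score.
records = (
    [("majority", True)] * 930 + [("majority", False)] * 70 +
    [("minority", True)] * 80 + [("minority", False)] * 20
)
overall, by_group = per_group_accuracy(records)
print(f"aggregate accuracy: {overall:.1%}")   # ~91.8%
for group, acc in sorted(by_group.items()):
    print(f"{group}: {acc:.1%}")              # majority 93.0%, minority 80.0%
```

In this made-up example, the aggregate figure of roughly 92% looks acceptable even though the minority group fares markedly worse, which is precisely why regulators and researchers push for per-group reporting.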
The European Parliament just last week voted in support of a resolution banning law enforcement from using facial recognition systems, citing various risks including discrimination, opaque decision-making, privacy intrusion, and challenges in protecting personal data.

"These potential risks are aggravated in the sector of law enforcement and criminal justice, as they may affect the presumption of innocence, the fundamental rights to liberty and security of the individual and to an effective remedy and fair trial," the European Parliament said. Specifically, it pointed to facial recognition services such as Clearview AI, which had built a database of more than three billion pictures that were illegally collected from social networks and other online platforms. The European Parliament further called for a ban on law enforcement using automated analysis of other human features, such as fingerprint, voice, gait, and other biometric and behavioural traits. The resolution passed, though, is not legally binding.

Because data plays an integral role in feeding and training AI models, what constitutes such data inevitably has been at the crux of key challenges and concerns behind the technology. The World Health Organisation (WHO) in June issued guidance cautioning that AI-powered healthcare systems trained primarily on data of individuals in high-income countries might not perform well for individuals in low- and middle-income environments. It also cited other risks, such as unethical collection and use of healthcare data, cybersecurity, and bias being encoded in algorithms.

"AI systems must be carefully designed to reflect the diversity of socioeconomic and healthcare settings and be accompanied by training in digital skills, community engagement, and awareness-raising," it noted. "Country investments in AI and the supporting infrastructure should help to build effective healthcare systems by avoiding AI that encodes biases that are detrimental to equitable provision of and access to healthcare services."

Fostering trust goes beyond AI

Singapore's former Minister for Communications and Information and Minister-in-charge of Trade Relations, S. Iswaran, previously acknowledged the tensions around AI and the use of data, and noted the need for tools and safeguards to better assure people with privacy concerns. In particular, Iswaran stressed the importance of establishing trust, which he said underpinned everything, whether it was data or AI. "Ultimately, citizens must feel these initiatives are focused on delivering welfare benefits for them and ensure their data will be protected and afforded due confidentiality," he said.

Singapore has been a strong advocate for the adoption of AI, introducing in 2019 a national strategy to leverage the technology to create economic value, enhance citizen lives, and arm its workforce with the necessary skillsets. The government believes AI is integral to its smart nation efforts and that a nationwide roadmap was necessary to allocate resources to key focus areas. The strategy also outlines how government agencies, organisations, and researchers can collaborate to ensure a positive impact from AI, and directs attention to areas where change or potential new risks must be addressed as AI becomes more pervasive. The key goal is to pave the way for Singapore, by 2030, to be a leader in developing and deploying "scalable, impactful AI solutions" in key verticals. Singaporeans, too, are expected to trust the use of AI in their lives, a trust that should be nurtured through a clear awareness of the benefits and implications of the technology.

Building trust, however, will need to go beyond simply demonstrating the benefits of AI.
People need to fully trust the authorities across various aspects of their lives, and trust that any use of technology will safeguard their welfare and data. The lack of trust in one aspect can spill over and impact trust in other aspects, including the use of AI-powered technologies.

Singapore in February urgently pushed through new legislation detailing the scope of local law enforcement's access to COVID-19 contact tracing data. The move came weeks after it was revealed the police could access the country's TraceTogether contact tracing data for criminal investigations, contradicting previous assertions that this information would only be used when the individual tested positive for the coronavirus. It sparked a public outcry and prompted the government to announce plans for the new bill limiting police access to seven categories of "serious offences", including terrorism and kidnapping.

Early this month, Singapore also passed the Foreign Interference (Countermeasures) Bill amidst heated debate and less than a month after it was first proposed in parliament. Pitched as necessary to combat threats from foreign interference in local politics, the Bill has been criticised for being overly broad in scope and for restricting judicial review. The opposition Workers' Party also pointed to the lack of public involvement and the speed at which the Bill was passed.

Will citizens trust their government's use of AI-powered technology in "delivering welfare benefits", especially in law enforcement, when they have doubts, correctly perceived or otherwise, that their personal data in other areas is properly policed? Doubt in one policy can metastasise and drive further doubt in other policies. With trust, as Iswaran rightly pointed out, an integral part of driving the adoption of AI in Singapore, the government may need to review its approach to fostering this trust amongst its population.

According to Deloitte, cities looking to use technology for surveillance and policing should balance security interests with the protection of civil liberties, including privacy and freedom. "Any experimentation with surveillance and AI technologies needs to be accompanied by proper regulation to protect privacy and civil liberties. Policymakers and security forces need to introduce regulations and accountability mechanisms that create a trustful environment for experimentation of the new applications," the consulting firm noted. "Trust is a key requirement for the application of AI for security and policing. To get the most out of technology, there must be community engagement."

Singapore must assess whether it has indeed nurtured a trustful environment, with the right legislation and accountability, in which citizens are properly engaged in dialogue, so they can collectively decide what is the country's acceptable use of AI in high-risk areas.
RiskRecon, a Mastercard company, and the Cyentia Institute released a study on Tuesday showing that some multi-party data breaches cause 26 times the financial damage of the worst single-party breach. The organizations used Advisen's Cyber Loss Database to examine incidents since 2008. Almost 900 multi-party breach incidents have been observed since 2008, and 147 newly uncovered ripples were observed across the entire dataset, with 108 occurring in the last three years.
The Advisen Cyber Loss Database has over 103,000 cyber events collected from publicly verifiable sources and was used extensively for the report. Since 2008, more than 2,726 incidents in the Advisen database have involved more than one organization. Still, only a subset of those are what the researchers called "ripple events" — those involving some form of B2B relationship between multiple parties. Using that as a filter, the incident base totaled 897 incidents from 2008 to 2020.

More than half of the newly identified ripples were in 2019 and 2020, and the report postulated that there is a two-year delay between when an incident takes place and when the ripple effects fully unfold, with some taking as long as five years. A median multi-party breach causes 10 times the financial damage of a traditional single-party breach, while the worst of the multi-party breach events causes 26 times the financial damage of the worst single-party breach. It typically takes 379 days for a ripple event to impact 75% of its downstream victims, and the median number of organizations impacted by ripple events across the dataset was four.

"While a stable number for multi-party breaches in 2020 is not likely, our analysis has already dug up 37 ripple events that swept up victims across a range of industries and scenarios last year," the report said. "The triggering events are often different, the business relationships vary, the scope of impact can vary wildly, and the depth of downstream reach is changeable. The one unifying factor is the technical integration or data sharing — direct and indirect — that spiderwebs across the generating organization and the recipients of downstream loss events."
The report lists a number of notable multi-party breaches, including incidents involving SolarWinds, Accellion — which affected the Washington State Auditor’s Office, New Zealand’s central bank, and the high-profile law firm Jones Day — Advanced Computer Software, which exposed hundreds of law firms, the cloud computing provider Blackbaud and more. In each incident, the personal data of millions was exposed, and the researchers found that financial and business support organizations dominate the top two slots in terms of ripple-generating victims and recipients of downstream loss events. The professional and financial sectors together are the source of over 47% of all ripples.
"Many companies are, at some point, both the generator of one ripple event and the downstream recipient of others generated by different organizations. This is a testament to the tight technical ties that bind suppliers, customers, and partners in today's digitally dominated business environment," the report explained. "Among those ripple events for which we have cost information, 80% involve some sort of direct financial damage. One out of five of the ripples involved ends up incurring fines and penalties, and one in 10 of them incurs response costs. While only a small fraction of ripples cause a loss of business income, such losses are particularly devastating. In those cases, the loss of income makes up 78% of costs."

The researchers found that when a ripple event triggers a loss of income, it leads to a loss of $36 million per event. Parsing through a subset of 154 ripples, the report found that most costs are borne by the initial victims of a multi-party breach.

"From the data presented in this report, one thing should be crystal clear — no organization is safe from a multi-party ripple event. As firms of all shapes and sizes continue to allow companies to access their data, client information, employee details, etc., they also open up more paths for security incidents that can harm their business," the report's authors explained. "The reality is while you can't protect yourself from every third-party threat, you can take control over the risks that will impact your business the most. The interconnectivity of different third- and fourth-party relationships is often hard to visualize and address."

There was a significant drop in the amount of time it took for ripples to disperse through third-party networks in 2012 and 2013, to less than 200 days, and the number dropped to 50 days in 2018. The report also looked at the duration of ripples from another angle, examining the intervals of time it took for some, half, and most of the downstream recipients to feel the impact of a multi-party incident.

"Overall, 25% of firms are involved within 32 days after the initial event, 50% by 151 days, and 75% by just over a year at 379 days. This shows that the fastest impacts rippled out from incidents within healthcare, likely due to the strong reporting requirements in that space. Meantime, the hospitality and information industries take approximately a year before most downstream victims fully feel a ripple," the report found.
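The quartile figures quoted above are straightforward to reproduce once you have per-victim impact dates for each incident. The sketch below uses made-up dates, not Advisen's data, to show how the 25/50/75% time-to-impact intervals for a single ripple event might be derived.

```python
from datetime import date
from statistics import quantiles

def days_to_impact(initial_event: date, victim_dates: list) -> list:
    """Days between the initial breach and each downstream victim's impact."""
    return sorted((d - initial_event).days for d in victim_dates)

# Hypothetical ripple event: one initial breach, six downstream victims.
initial = date(2020, 1, 15)
victims = [
    date(2020, 2, 10), date(2020, 3, 1), date(2020, 6, 20),
    date(2020, 9, 5), date(2021, 1, 30), date(2021, 4, 12),
]

delays = days_to_impact(initial, victims)

# Quartiles answer: within how many days were 25%, 50%, and 75% of
# the downstream victims swept into the incident?
q1, q2, q3 = quantiles(delays, n=4)
print(f"25% impacted within {q1:.0f} days")
print(f"50% impacted within {q2:.0f} days")
print(f"75% impacted within {q3:.0f} days")
```

Aggregating these per-incident quartiles across all 897 incidents is, presumably, how the report arrives at its 32/151/379-day figures.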
Email phishing attacks and brute force attacks against exposed remote desktop protocol (RDP) services are the most common methods cyber criminals are using to gain an initial foothold in corporate networks to lay the foundations for ransomware attacks.
Cybersecurity researchers at Coveware analysed ransomware attacks during the second quarter of this year and found that phishing and RDP attacks remain the most popular entry points. Part of the appeal for cyber criminals is that these attacks are low-cost to carry out while also being effective.

Phishing attacks – where cyber criminals send emails containing a malicious attachment or direct victims towards a compromised website that delivers ransomware – grew slightly in popularity over the last quarter, accounting for 42 percent of attacks. Meanwhile, attacks against RDP services, where cyber criminals brute-force weak or default usernames and passwords – or sometimes gain access to legitimate credentials via phishing emails – remain extremely popular with ransomware groups, also accounting for 42 percent of attacks.

Both phishing and RDP attacks remain effective because they are relatively simple for cyber criminals to carry out and, when successful, can provide a gateway to a whole corporate network. Breaching RDP credentials is particularly useful because it allows attackers to enter the network with legitimate logins, making malicious activity more difficult to detect. Software vulnerabilities are a distant third among the most popular vectors for breaching networks to deliver ransomware, accounting for 14 percent of attacks, but that doesn't make them any less dangerous – especially as they're often leveraged by some of the most sophisticated and disruptive ransomware gangs.
According to Coveware, Sodinokibi – also known as REvil – accounted for the highest percentage of ransomware attacks during the reporting period, at 16.5 percent. REvil is responsible for some of the most high-profile ransomware attacks this year, including the massive ransomware attack on customers of Kaseya. In recent weeks, REvil's infrastructure has mysteriously gone offline.

The second most prolific ransomware during the period was Conti, accounting for 14.4 percent of attacks. One of the group's most high-profile attacks was against the Irish healthcare system. In the end, Conti provided the decryption key for free, but healthcare services across Ireland remained disrupted for months.

The third most prolific ransomware during the three months between April and June was Avaddon, a form of ransomware distributed via phishing emails, which accounted for 5.4 percent of attacks. In June, the group behind Avaddon announced they were shutting down and released a decryption key for the ransomware. Newer forms of ransomware, Mespinoza and Hello Kitty, make up the rest of the top five – and it's likely that with groups like REvil and Avaddon seemingly shutting down, new ransomware groups will attempt to replace them.

What all these ransomware groups have in common is how they exploit the likes of phishing attacks and weaknesses in RDP services to lay the foundation for attacks. To help protect networks from being compromised, organisations can apply multi-factor authentication across the network, which can help stop intruders from breaching accounts. It's also recommended that organisations apply software updates and security patches when they are released, in order to prevent attackers from exploiting known vulnerabilities to gain access to networks.
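To illustrate why multi-factor authentication blunts credential brute-forcing, here is a minimal sketch of the time-based one-time password (TOTP) scheme from RFC 6238 that many authenticator apps implement. Even if an attacker guesses or phishes the password, the login also requires a six-digit code derived from a shared secret and the current time, rotating every 30 seconds. The secret below is a hypothetical example value, not one tied to any real account.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute the current RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32)
    # Number of 30-second intervals since the Unix epoch, as a 64-bit counter.
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    # Dynamic truncation: the low 4 bits of the last byte pick an offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Hypothetical shared secret; real secrets are provisioned via QR code.
SECRET = "JBSWY3DPEHPK3PXP"
print("current one-time code:", totp(SECRET))
```

Because the code changes every interval and is derived from a secret the user never types, brute-forcing the RDP password alone no longer yields a working login.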
I have long advocated keeping machines up to date. When machines become too old to update, I've bitten the bullet and dumped them, even if they were still fully functional.

With all the malware and ransomware out there, not to mention simple flaws that could cause a system to crash, it's become necessary to keep machines up to date, regularly updating both the operating system and applications software. When that software can no longer be updated, it's time to toss the machine. But should it be?

I just finished upgrading my small fleet of older Macs. I pulled one iMac and four Mac minis out of service. The iMac went to a friend who's tech savvy enough, and responsible enough, to manage his own security. But those four Mac minis are now sitting on a shelf. I'd like to donate them to a local school or library. But because they can't be upgraded to the latest versions of MacOS (and can't get the latest security fixes), I won't give them to unsuspecting muggles, no matter how deserving they might be. Making donations of woefully out-of-date machines that can't get security updates isn't an act of charity; it's creating potential victims.

But here's the thing. Even though those Mac minis are eight and nine years old, they are perfectly functional. Given Apple's build quality, there is no reason they wouldn't keep chugging along for another eight or nine years.

The modern tech lifecycle
Most IT folk understand, and probably even agree with, the modern tech lifecycle. Put simply, as newer releases of computers and operating systems come out, older software and hardware become obsolete. Vendors don't want to continue to support systems that are quite old. Developers don't want to test against numerous generations of older machines. The cost to maintain and update the dregs of old gear is impractical.

It's also impractical because features that run like the wind on new hardware can be dog slow on older hardware. Some features (for example, Face ID on iOS devices) simply won't run on older hardware because of intrinsic limits of that hardware (like not having fast enough processing power, the right GPU, or the necessary lenses).

As an independent developer, I can't support and test versions of code for users running very out-of-date software or hardware. I barely have the time to support and test the more current releases. So, as a developer, I concur with the idea that tech becomes obsolete over time, and it's regularly necessary to move on.

A paradigm shift

But as I looked at those four perfectly functional Mac minis sitting in a stack on a shelf, never to process bits ever again, I found myself getting upset. It's one thing for an independent developer to set a baseline for version or operating system support. It's another for Apple, the world's most valuable company, with a valuation in the trillions of dollars. It's not like Apple can't afford to make sure even its oldest machines stay safe year after year.

What would that cost? The salary of a hundred engineers would be, roughly — in Silicon Valley dollars — about $20 million. Let's say facilities and gear for those hundred engineers is another $20 million. Does anyone seriously think Apple can't afford $40 million a year to keep software up to date? In its second quarter, Apple posted revenues of $89.6 billion (up 54 percent year over year). $40 million isn't even 0.05% of Apple's quarterly revenue. Heck, $40 million is only 15% of Tim Cook's $265 million 2020 compensation package. He could pay to keep all installed Macs up to date, and it would cost him the same share of his compensation that putting up a fence would cost us normal folk.
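The back-of-envelope arithmetic is easy to check. This quick sketch uses only the figures quoted in this column:

```python
engineers_cost = 20e6      # rough salary bill for 100 engineers
facilities_cost = 20e6     # rough facilities and gear for those engineers
annual_cost = engineers_cost + facilities_cost

quarterly_revenue = 89.6e9   # Apple Q2 revenue, as quoted above
cook_compensation = 265e6    # Tim Cook's 2020 package, as quoted above

print(f"share of quarterly revenue: {annual_cost / quarterly_revenue:.3%}")  # ~0.045%
print(f"share of CEO compensation:  {annual_cost / cook_compensation:.0%}")  # ~15%
```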
There are some natural constraints to this "keep everything updated" plan I seem to be advocating. First, developers can't all be expected to keep all their software compatible with ancient machines. Yes, sure, Microsoft and Adobe could, but it's beyond the scope of all the little indie developers out there. Second, performance will undoubtedly be pretty poor on the oldest machines, and not all the advanced features will run on them.

But even with these restrictions, Apple could certainly establish a baseline. All the applications that ship with the machines could be kept up to date. On Macs, that would provide a nice suite of tools for users of older machines. And updating and hardening Safari would provide a solid, safe baseline for users of older machines.

The state of Apple support

Apple doesn't explicitly state its end-of-life policy for devices. When a new OS is released, it will list the devices supported. From the supported list, you can derive a secondary list of those devices left behind. Apple does maintain an information page detailing Apple security updates. As of today (end of September, 2021), Apple is still issuing security updates for MacOS Catalina. That means three of the four machines I took out of service can still be updated — but they don't run Big Sur or Monterey, and Apple won't say when Catalina security updates will stop.

My fourth newly out-of-service machine, the 2011 Mac mini, can't be updated beyond High Sierra. Apple's last High Sierra security patch was in 2020, and the company gives no indication of whether (a) there are any known but unpatched security flaws in High Sierra, and (b) it ever intends to issue future patches. In fact, this lack of transparency is policy. On that same Security Updates page, Apple says, "For the protection of our customers, Apple doesn't disclose, discuss, or confirm security issues until an investigation has occurred and patches or releases are generally available." That's… helpful. NOT. Especially for users of older machines.

But this isn't just about my four computers. I took a quick look on eBay and found a lot of older machines for sale. Here's just one example: an old 2008 MacBook Pro. While it might not be something the typical ZDNet reader is likely to buy, someone on a limited budget in need of a computer might well decide to spend $66 plus $17.14 shipping to land a MacBook Pro. This low-cost machine already had 12 bids and, as of the time I took the screenshot, two days left to go.

But, according to the site Apple History, the 2008 MacBook Pro maxes out at 10.10.4. That's OS X Yosemite, an operating system that came out in October 2014 and received its last major update in August 2015. According to Apple's Security Updates page, the last security update for Yosemite was in 2017 — four years ago. The last time Safari was updated for Yosemite was also four years ago. This is what I'm talking about. There is no reason that Apple, a company that brought in nearly $90 billion (with a B) in revenue last quarter, couldn't keep churning out security updates for these older machines.

Time for the big vendors to step up

Those machines are out there, people are using them, and it's well within Apple's power to keep those people safe. So why don't they? Or, a better question: Apple, when will you step up? This article has been mostly focused on Macs, but phones need the same attention. I also call on companies like Samsung to keep older devices up to date.
Samsung also had a record last quarter, pulling in KRW 63.67 trillion ($54 billion) in sales and KRW 12.57 trillion ($10 billion) in operating profit. With $10 billion in operating profit in just one quarter, do we seriously think Samsung can't issue updates for all those old Android phones it sold? But it doesn't. Many of those phones haven't received updates since just a year or two after they were sold. Android is a cesspool of malware, which Samsung is essentially enabling through its inaction in providing security updates.

As I said before, there is a line somewhere between the individual developer like me and companies like Apple and Samsung, which are rolling in billions of dollars in profits. I don't expect boutique developers to handle the load of back-facing security updates. But the big players? Not doing so is irresponsible.

There are millions of those machines out there, still in use. All those machines are actively vulnerable to malware and other security threats. Worse, those machines can become patient-zero devices, spreading malware to other machines on their networks. So it's not just about updating old machines to keep their users safe. It's about updating old machines to keep us all safe.

So, the next time you see Apple give a long song and dance about how environmentally responsible it is, how much it's moving towards sustainability, and how many robots it has built to disassemble old electronics, keep in mind that a minor investment could have kept millions of old computers and phones out of landfills, and made them available to lower-income users who need them.
What about you? Do you have a stack of old gear you can't responsibly give away, but also don't want to toss out? Do you think Apple and Samsung have been dropping the ball in not taking responsibility for older security updates? Let us know in the comments below.