More stories

  • Virtual-world tech company owner arrested over alleged $45m investment fraud scheme

    The owner of several metaverse companies has been arrested over an alleged investment fraud scheme that defrauded more than 10,000 victims of over $45 million. Last week, the US Department of Justice (DoJ) said that Neil Chandran, a Las Vegas resident, had been arrested in connection with the alleged scheme. The 50-year-old owns companies operating […]

  • The British Army is investigating after its Twitter and YouTube accounts were hijacked

    The British Army is investigating after its Twitter and YouTube accounts were both breached. On July 3, as reported by the BBC, Army accounts were taken over and used to promote NFT and cryptocurrency schemes, including YouTube videos featuring the image of entrepreneur Elon Musk. The British Army’s YouTube account […]

  • Google: Half of zero-day exploits linked to poor software fixes

    Half of the 18 ‘zero-day’ bugs that were exploited before a patch was publicly available this year could have been prevented had major software vendors created more thorough patches and done more testing. That’s the verdict of researchers at Google Project Zero (GPZ), which has so far counted 18 zero-day […]

  • Microsoft: This Android malware will switch off your Wi-Fi, empty your wallet

    Microsoft has shared a detailed technical analysis of the persistent problem of ‘toll fraud’ apps on Android, which it said remain one of the most prevalent types of Android malware. Microsoft’s 365 Defender Team points out that ‘toll billing’, or Wireless Application Protocol (WAP) fraud, is more complex than SMS fraud or call […]

  • FBI and CISA warn: This ransomware is using RDP flaws to break into networks

    Several US law enforcement agencies have shone a spotlight on MedusaLocker, a ransomware gang that kept busy during the pandemic by hitting healthcare organizations. MedusaLocker emerged in 2019 and has been a problem ever since, ramping up activity during the early stages of the pandemic to maximize profits. […]

  • Microsoft warning: This malware that targets Linux just got a big update

    Microsoft says it has spotted “notable updates” to malware that targets Linux servers in order to install cryptominers. Microsoft has called out the so-called “8220 gang”, which has recently been spotted exploiting the critical bug affecting Atlassian Confluence Server and Data Center, tracked as CVE-2022-26134. “The group has actively updated […]

  • CISA: Switch to Microsoft Exchange Online 'Modern Auth' before October

    It’s finally time for businesses running Exchange Online to switch from Basic Authentication to Modern Authentication before Microsoft disables the former on October 1, 2022, according to the US Cybersecurity and Infrastructure Security Agency. One of the key features that Basic Authentication, or “Basic Auth”, doesn’t support is multi-factor authentication (MFA), which […]

  • AI ethics should be hardcoded like security by design

    Businesses need to think about ethics from ground zero when they begin conceptualising and developing artificial intelligence (AI) products. This will help ensure AI tools can be implemented responsibly and without bias. The same approach is already deemed essential for cybersecurity products, where a “security by design” development principle drives the need to assess risks and hardcode security from the start, so that piecemeal patchwork and costly retrofitting can be avoided at a later stage.

    This mindset should now be applied to the development of AI products, said Kathy Baxter, principal architect for Salesforce.com’s ethical AI practice, who underscored the need for organisations to meet fundamental development standards with AI ethics. She noted that there were many lessons to be learned from the cybersecurity industry, which has evolved in the decades since the first malware surfaced in the 1980s. For a sector that did not even exist before then, cybersecurity has since transformed the way companies protect their systems, with an emphasis on identifying risks from the start and developing basic standards and regulations for implementation.

    As a result, most organisations today have put in place basic security standards that all stakeholders, including employees, should observe, Baxter said in an interview with ZDNet. All new hires at Salesforce.com, for instance, go through an orientation process where the company outlines what is expected of employees in terms of cybersecurity practices, such as adopting a strong password and using a VPN.

    The same applies to ethics, she said, adding that an internal team is dedicated to driving this within the company. There are also resources to help employees assess whether a task or service should be carried out based on the company’s guidelines on ethics, and to understand where the red lines are. Salesforce.com’s AI-powered Einstein Vision, for example, can never be used for facial recognition, so any sales member who is unaware of this and tries to sell the product for such a deployment would be violating the company’s policies.

    And just as cybersecurity practices are regularly reviewed and revised to keep pace with the changing threat landscape, the same should apply to policies related to AI ethics, she said. This is critical because societies and cultures change over time: values deemed relevant 10 years ago may no longer align with the views a country’s population holds today, and AI products need to reflect this.

    Data a key barrier to addressing AI bias

    While policies can mitigate some risks of bias in AI, other challenges remain, in particular, access to data. A lack of volume or variety can result in an inaccurate representation of an industry or segment. This is a significant challenge in the healthcare sector, particularly in countries such as the US where there is no socialised medicine or government-run healthcare system, Baxter said. When AI models are trained on limited datasets drawn from a narrow subset of a population, that can affect the delivery of healthcare services and the ability to detect diseases for certain groups of people.

    Salesforce.com, which cannot access or use its customers’ data to train its own AI models, plugs the gaps by purchasing external data, such as the linguistic data used to train its chatbots, and by tapping synthetic data.
    Asked about the role regulators play in driving AI ethics, Baxter said mandating the use of specific metrics could be harmful, as there were still many questions around the definition of “explainable AI” and how it should be implemented. The Salesforce.com executive is a member of Singapore’s advisory council on the ethical use of AI and data, which advises the government on policies and governance related to the use of data-driven technologies in the private sector.

    Pointing to her experience on the council, Baxter said its members quickly realised that defining “fairness” alone was complicated, with more than 200 statistical definitions, and that what was fair for one group sometimes inevitably would be less fair for another. Defining “explainability” was also complex: even machine learning experts could misinterpret how a model worked based on pre-defined explanations, she said. Set policies or regulations should be easily understood by anyone who uses AI-powered tools, across all sectors, including field agents and social workers.

    Realising that such issues were complex, the Singapore council determined it would be more effective to establish a framework and guidelines, including toolkits, to help AI adopters understand the technology’s impact and be transparent in their use of AI. Singapore last month released a toolkit, called A.I. Verify, that it said would enable businesses to demonstrate their “objective and verifiable” use of AI. The move was part of the government’s efforts to drive transparency in AI deployments through technical and process checks.

    Baxter stressed the need to dispel the misconception that AI systems are fair by default simply because they are machines and, hence, devoid of bias. Organisations and governments must invest the effort to ensure the benefits of AI are equally distributed and AI's application meets certain criteria of responsible AI, she said.
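    The fairness trade-off Baxter describes is easy to see with a minimal sketch comparing two of the many statistical definitions she alludes to: demographic parity and equal opportunity. The numbers below are invented for illustration and are not drawn from the article, the council’s work, or the A.I. Verify toolkit.

```python
# Illustrative toy example: two common statistical fairness definitions
# applied to the same hypothetical classifier.
#   - demographic parity: P(approve | group A) == P(approve | group B)
#   - equal opportunity:  P(approve | qualified, A) == P(approve | qualified, B)

def rate(selected, total):
    """Fraction of applicants that were approved."""
    return selected / total

# Hypothetical loan decisions for two groups of 100 applicants each.
# Group A: 50 qualified, of whom 40 are approved; plus 10 unqualified approved.
# Group B: 80 qualified, of whom 40 are approved; plus 10 unqualified approved.
approved_total_a = 40 + 10   # 50 approvals in group A
approved_total_b = 40 + 10   # 50 approvals in group B

# Demographic parity holds: both groups see a 50% approval rate.
print(rate(approved_total_a, 100), rate(approved_total_b, 100))  # 0.5 0.5

# Equal opportunity fails: qualified applicants are approved 80% of the
# time in group A but only 50% of the time in group B.
print(rate(40, 50))  # 0.8
print(rate(40, 80))  # 0.5
```

    Equalising approval rates for qualified applicants here would change the overall approval rates and break demographic parity, which is precisely the point: satisfying one definition of fairness can make outcomes less fair under another.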