More stories

  • AI ethics should be hardcoded like security by design

    Businesses need to think about ethics from the ground up when they begin conceptualising and developing artificial intelligence (AI) products. This will help ensure AI tools can be implemented responsibly and without bias. The same approach is already deemed essential for cybersecurity products, where a “security by design” development principle drives the need to assess risks and hardcode security from the start, so that piecemeal patchwork and costly retrofitting can be avoided later on.

    This mindset should now be applied to the development of AI products, said Kathy Baxter, principal architect for Salesforce.com’s ethical AI practice, who underscored the need for organisations to meet fundamental development standards with AI ethics. She noted that there were many lessons to be learned from the cybersecurity industry, which had evolved over the decades since the first malware surfaced in the 1980s. For a sector that did not even exist before then, cybersecurity had since transformed the way companies protected their systems, with an emphasis on identifying risks from the start and developing the basic standards and regulations that should be implemented.

    As a result, most organisations today would have put in place basic security standards that all stakeholders, including employees, should observe, Baxter said in an interview with ZDNet. All new hires at Salesforce.com, for instance, have to undergo an orientation process in which the company outlines what is expected of employees in terms of cybersecurity practices, such as adopting a strong password and using a VPN.

    The same applied to ethics, she said, adding that there was an internal team dedicated to driving this within the company. There were also resources to help employees assess whether a task or service should be carried out based on the company’s ethics guidelines, and to understand where the red lines were, Baxter said. Salesforce.com’s AI-powered Einstein Vision, for example, can never be used for facial recognition, so any member of the sales team who is unaware of this and tries to sell the product for such a deployment will be doing so in violation of the company’s policies.

    And just as cybersecurity practices were regularly reviewed and revised to keep pace with the changing threat landscape, the same should be applied to policies related to AI ethics, she said. This was critical as societies and cultures changed over time, and values deemed relevant 10 years ago might no longer be aligned with the views a country’s population held today, she noted. AI products needed to reflect this.

    Data a key barrier to addressing AI bias

    While policies could mitigate some risks of bias in AI, there remained other challenges, in particular access to data. A lack of volume or variety could result in an inaccurate representation of an industry or segment. This was a significant challenge in the healthcare sector, particularly in countries such as the US where there was no socialised medicine or government-run healthcare system, Baxter said. When AI models were trained on limited datasets based on a narrow subset of a population, this could affect the delivery of healthcare services and the ability to detect diseases for certain groups of people.

    Salesforce.com, which cannot access or use its customers’ data to train its own AI models, plugs the gaps by purchasing data from external sources, such as the linguistic data used to train its chatbots, as well as by tapping synthetic data.
    Asked about the role regulators played in driving AI ethics, Baxter said mandating the use of specific metrics could be harmful, as there were still many questions around the definition of “explainable AI” and how it should be implemented. The Salesforce.com executive is a member of Singapore’s advisory council on the ethical use of AI and data, which advises the government on policies and governance related to the use of data-driven technologies in the private sector.

    Pointing to her experience on the council, Baxter said its members realised quickly that defining “fairness” alone was complicated, with more than 200 statistical definitions. Furthermore, what was fair for one group sometimes inevitably would be less fair for another, she noted. Defining “explainability” was also complex, as even machine learning experts could misinterpret how a model worked based on pre-defined explanations, she said. Any policies or regulations that were set should be easily understood by anyone who used AI-powered data, across all sectors, whether field agents or social workers.

    Realising that such issues were complex, Baxter said the Singapore council determined it would be more effective to establish a framework and guidelines, including toolkits, to help AI adopters understand the technology’s impact and be transparent about their use of AI. Singapore last month released a toolkit, called A.I. Verify, that it said would enable businesses to demonstrate their “objective and verifiable” use of AI. The move was part of the government’s efforts to drive transparency in AI deployments through technical and process checks.

    Baxter also stressed the need to dispel the misconception that AI systems were fair by default simply because they were machines and, hence, devoid of bias. Organisations and governments must invest the effort to ensure AI’s benefits were equally distributed and its application met certain criteria of responsible AI, she said.
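    To make the point about competing fairness definitions concrete, here is a minimal, hypothetical Python sketch, not drawn from the article or from any Salesforce or A.I. Verify tooling: it scores one set of invented predictions against two widely used statistical fairness criteria, demographic parity (equal approval rates across groups) and equal opportunity (equal true-positive rates), and shows that satisfying one does not guarantee satisfying the other.

      # Hypothetical illustration only: groups, labels and predictions are invented
      # to show that two common statistical fairness definitions can disagree.

      def selection_rate(preds):
          # Share of people the model approves; demographic parity compares this across groups.
          return sum(preds) / len(preds)

      def true_positive_rate(preds, labels):
          # Share of genuinely qualified people the model approves; equal opportunity compares this.
          approved_qualified = sum(p for p, y in zip(preds, labels) if y == 1)
          return approved_qualified / sum(labels)

      # Group A: 10 applicants, 5 qualified; the model approves 4, all of them qualified.
      labels_a = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
      preds_a  = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]

      # Group B: 10 applicants, 2 qualified; the model approves 4, only 2 of them qualified.
      labels_b = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
      preds_b  = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]

      # Demographic parity: approval rates are 0.4 vs 0.4 -> satisfied.
      print(selection_rate(preds_a), selection_rate(preds_b))

      # Equal opportunity: true-positive rates are 0.8 vs 1.0 -> violated.
      print(true_positive_rate(preds_a, labels_a), true_positive_rate(preds_b, labels_b))

    A regulator mandating the first metric would pass this model while the second would flag it, which is the kind of tension the council ran into when trying to settle on a single definition.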

  • Roe v. Wade fallout: How tech giants and big banks are changing employee policies to adapt

    Technology giants and enterprise companies are making their thoughts known on the overturning of Roe v. Wade, with many pledging financial support to employees. The landmark Roe v. Wade ruling, which guaranteed a constitutional right to an abortion in the United States, was overturned on June 24.
    A number of states passed ‘trigger’ laws to ban or restrict the procedure as soon as Roe v. Wade was dissolved, although changes made by states including Louisiana, Texas, and Utah have been temporarily blocked by court judges. Protesters have taken to the streets, and California, Oregon, and Washington have pledged to remain “safe havens” for those seeking a termination.

    The upheaval has not gone unnoticed by companies and organizations across the United States. Many firms have not released public statements on the overturning of Roe v. Wade, instead choosing to issue spokesperson-based statements on request. Here’s how organizations are responding.

    Taking a stance

    Amazon: The e-commerce giant has pledged to pay employees up to $4,000 in travel expenses for non-life-threatening medical treatments, including abortion services.

    Apple: Members of staff can use their benefits to cover the costs associated with traveling outside of their home state for healthcare if required. Apple said, “We support our employees’ rights to make their own decisions regarding their reproductive health.”

    Atlassian: Financial support is on offer for US employees of the Australian software company.

    Bank of America: Travel expense reimbursement now covers reproductive healthcare, including abortions.

    Canva: Canva, too, will provide help toward travel and accommodation costs for US employees seeking abortion services.

    Citigroup: Citigroup said: “We will continue to provide benefits that support our colleagues’ family planning choices wherever we are legally permitted to do so.”

    Dick’s Sporting Goods: On Twitter, Dick’s Sporting Goods said that in response to the ruling, the company will provide up to $4,000 for employees living in restricted states to “travel to the nearest location where that care is legally available.”

    Disney: Disney said the company has “processes in place so that an employee who may be unable to access care in one location has affordable coverage for receiving similar levels of care in another location,” including pregnancy-related decisions. “Disney will continue to prioritize the health, safety, and well-being of our team members and their families,” the firm said.

    Google: Google reminded employees in a memo that its “US benefits plan and health insurance covers out-of-state medical procedures that are not available where an employee lives and works,” adding that employees “can also apply for relocation without justification, and those overseeing this process will be aware of the situation.”

    JPMorgan Chase & Co: In a June 1 memo, the financial giant said its healthcare plans would be expanded to include travel costs for legal abortions. “Beginning in July, we will expand this benefit to include all covered services that can only be obtained far from your home, which would include legal abortion,” the company said.

    Lyft: Lyft will cover travel expenses and also intends to expand its legal defense commitment to protect drivers who might be sued for taking passengers to clinics.

    Meta: Meta, Facebook’s parent company, will help cover out-of-state travel costs for medical care, with the firm “assessing how best to do so given the legal complexities involved.” However, Meta engineer Ambroos Vaes claims that the firm has banned discussion of the situation internally. “Sheryl Sandberg posted on her Facebook account about what happened today, and even links to her post are removed, out of fear of offending the few employees who might actually agree with the insanity that is going on,” the engineer says.

    Microsoft: The Redmond giant says it remains committed to supporting employees in accessing critical healthcare. Microsoft said this “includes our previously announced support for travel expense assistance for medical services covered in our US health plan, when care options, including abortion, are limited in an employee’s home region.”

    Netflix: Netflix already provides an allowance for full-time US staff who need to travel to seek healthcare.

    Salesforce: Salesforce said it will continue to offer travel relocation benefits “to ensure employees and their families have access to critical health care services.”

    Starbucks: In May, Starbucks said in an open letter to staff that “no matter where you live, or what you believe, we will always ensure you have access to quality healthcare.” “Starbucks healthcare plan [includes] a medical travel reimbursement benefit to access an abortion, and coming soon, access to gender-affirming care,” the firm said.

    Tesla: Tesla will provide reimbursement for healthcare and abortion services if employees need to travel out of state.

    Wells Fargo: Travel benefits for medical reasons will be expanded “in accordance with applicable law.”

    Yelp: Following Texas’ move to introduce trigger laws with strict abortion limits, Yelp said that employees could claim travel expenses for seeking abortion care out of state.

    The ramifications for businesses

    The ruling will have a ripple effect, not only when it comes to reproductive health and family planning, but also on where companies choose to base themselves and where their investments go.

    Take Texas, for example. The state already has one of the most restrictive abortion laws in the US, and while it attempted to enact a trigger ban, this has been temporarily suspended by the court. In March, Texas state Rep. Briscoe Cain sent a cease-and-desist letter to Citigroup for offering to reimburse travel expenses to access the procedure out of state. Cain demanded that “they immediately terminate coverage of elective abortions performed in Texas in its employee-benefit plans,” or face the prospect of new legislation to prevent organizations in the area from doing business with those that offer travel benefits for abortion-related care.

    Robin Fretwell Wilson, a law professor at the University of Illinois, told NBC News that it is only a matter of time before companies face lawsuits for the ‘violation’ of abortion bans at the state level by offering to cover abortion-related travel expenses. Other states may follow suit, and businesses may also take action themselves by rethinking where they choose to settle. Mayor of Kansas City Quinton Lucas said in June that he knew of “a business that has declined to come to Kansas City, Missouri because of the eager action of our state leaders to restrict the rights of women and families.” The Kansas City Council is set to vote on a resolution for city employees to receive a stipend for out-of-state abortion services.

    There’s also a profit angle to consider. Businesses that choose to offer travel expense reimbursement for family planning and abortion-related healthcare may be able to write off the additional cost as a business expense. However, US Senator Marco Rubio of Florida has introduced a bill to “prohibit employers from deducting expenses related to their employees’ abortion travel costs or so-called ‘gender-affirming care’ for young children of their employees.” Furthermore, there is the workforce itself to consider: individuals may be unwilling to move to or work in states that have chosen to restrict the right to abortion, and this could affect the talent an organization is able to hire or retain.

    Supporting employees

    The overturning of Roe v. Wade is a sensitive topic and one that will impact the provision of healthcare in the United States, potentially for decades to come, for individuals who have become pregnant, consensually or otherwise. With this in mind, executives must tread carefully, no matter their personal views on the topic.

    While many companies have pledged to assist with health-related travel expenses, given the extreme sensitivity of the topic, employees forced to give detailed reasons for their out-of-state travel may not take up the offer of financial help. They may also be concerned that information gathered by employers about their decision could be used against them in prosecutions following a subpoena or data request by law enforcement. Employers should examine disclosure policies and consider loosening rules on how much employees need to reveal about their healthcare.

    It’s somewhat dystopian for companies to have any say or control in healthcare in 2022. However, this has become the reality in the US, and it is up to employers to shoulder that responsibility carefully and respectfully.

  • Period tracking apps are no longer safe. Delete them

    The battle over abortion and women’s rights to healthcare reached a peak in the United States the moment the landmark Roe v. Wade case was overturned by the Supreme Court. In a number of states, now or in the coming weeks, providing abortion healthcare services will be made illegal, or so restricted that they will be almost impossible to obtain. Concerns have now been raised over period tracking apps’ data practices and security, and what their use could mean for those able to get pregnant in the future.

    The message is simple: you should stop using them. As Professor Gina Neff warned, you should “delete every digital trace of any menstrual tracking.” This is why.

  • Cybersecurity leaders are anticipating mass resignations within the year – here's why

    Four in 10 UK cybersecurity leaders say stress could push them to leave their job within the next year, according to a new study. Combined with the ongoing skills crisis, mass resignations could leave many sectors in a precarious situation. Cybersecurity services company Bridewell surveyed 521 critical national infrastructure decision-makers across multiple […]

  • This new malware is at the heart of the ransomware ecosystem

    A recently developed form of malware has quickly become a key component in powering ransomware attacks. The malware, called Bumblebee, has been analysed by cybersecurity researchers at Symantec, who’ve linked it to ransomware operations including Conti, Mountlocker and Quantum. “Bumblebee’s links to a number of high-profile ransomware operations suggest that it is […]

  • This sophisticated malware is targeting routers to break into networks

    A newly discovered remote access trojan (RAT) called ZuoRAT has targeted remote workers by exploiting flaws in often unpatched small office/home office (SOHO) routers. Researchers at Lumen’s Black Lotus Labs threat intelligence unit report that ZuoRAT is part of a highly targeted, sophisticated campaign that has been targeting workers across North America and […]

  • These are the 25 most dangerous software bugs you need to worry about

    A list detailing the top 25 “most dangerous” software flaws, some of which could allow attackers to take over a system, has been published. The list was developed by the Homeland Security Systems Engineering and Development Institute, sponsored by the Cybersecurity and Infrastructure Security Agency (CISA) and operated by MITRE. It uses Common Vulnerabilities and […]

  • FBI warning: Crooks are using deepfakes to apply for remote tech jobs

    Scammers are using deepfakes and stolen personally identifiable information during online job interviews for remote roles, according to the FBI. The use of deepfakes, or synthetic audio, image and video content created with AI or machine-learning technologies, has been on the radar as a potential phishing threat for several years. ZDNet […]