
Why geographical diversity is critical to building effective and safe AI tools


Organizations cannot afford to pick sides in the global market if they want artificial intelligence (AI) tools to deliver the capabilities they seek. 

Geographical diversity is critical as organizations look to develop AI tools that can be adopted worldwide, according to Andrea Phua, senior director of the national AI group and director of the digital economy office at Singapore’s Ministry of Digital Development and Information (MDDI). 

In response to a question on whether it was “realistic” for Singapore to remain neutral amid the US-China trade strife over AI chip exports, Phua said it would be more powerful and beneficial to have products built by teams based in different global markets, each able to contribute key components of AI.

Speaking during a panel discussion at Fortune’s AI Brainstorm event in Singapore this week, she said these components include the ability to apply context to data models and to integrate safety and risk management measures.


She added that Singapore collaborates on AI with several countries and organizations, including the US, China, ASEAN member states, and the United Nations, where Singapore currently chairs the Digital Forum of Small States.

“We use these platforms to discuss how to govern AI well, what [infrastructure] capacity is needed, and how to learn from each other,” Phua said. She noted that these multilateral discussions help identify safety and security risks that may occur differently in different parts of the world and provide local and regional context to translate data better. 

She added that Singapore has conversations with China on AI governance and policies, and works closely with the US government across the AI ecosystem.

“It is important to invest in international collaborations because the more we understand what is at stake, and know we have friends and partners to guide us through the journey, we’ll be better off for it,” Phua said.

This might prove particularly valuable as generative AI (gen AI) is used increasingly in cyber attacks. 


In Singapore, for example, 13% of phishing emails analyzed last year were found to contain AI-generated content, according to the latest Singapore Cyber Landscape 2023 report released this week by the Cyber Security Agency (CSA). 

The government agency responsible for the country’s cybersecurity operations said 4,100 phishing attempts were reported to the Singapore Cyber Emergency Response Team (SingCERT) last year, down 52% from the 8,500 cases in 2022. The 2023 figure, however, was still 30% higher than in 2021, CSA noted.


“This decline bucked a global trend of sharp increases, which were likely fueled by the usage of gen AI chatbots like ChatGPT to facilitate the production of phishing content at scale,” it detailed. 

It also warned that cybersecurity researchers have predicted a rise in the scale and sophistication of phishing attacks, including AI-assisted or AI-generated phishing email messages that are tailored to the victim and contain additional content, such as deepfake voice messages.

“The use of Gen AI has brought a new dimension to cyber threats. As AI becomes more accessible and sophisticated, threat actors will also become better at exploiting it,” said CSA’s chief executive and Commissioner of Cybersecurity David Koh.


“As it is, AI already poses a formidable challenge for governments around the world [and] cybersecurity professionals would know that we are merely scratching the surface of gen AI’s potential, both for legitimate applications and malicious uses,” Koh said. He pointed to reports of AI-generated content, including deepfakes in video clips and memes, that have been used to sow discord and influence the outcome of national elections.  


At the same time, there are new opportunities for AI to be tapped to enhance cyber resilience and defense, he said. More specifically, the technology has shown potential in detecting abnormal behavioral patterns and ingesting large volumes of data logs and threat intel, he noted.  

“[This] can enhance incident response and enable us to thwart cyber threats more swiftly and accurately while alleviating the load on our analysts,” Koh said. 

He added that the Singapore government is also working on various efforts to ensure AI is trustworthy, safe, and secure.
