
Governments need to beef up cyberdefense for the AI era – and get back to the basics


Governments will likely want to take a more cautious path in adopting artificial intelligence (AI), especially generative AI (gen AI), as they are largely tasked with handling their populations' personal data. That caution must also extend to beefing up their cyberdefense as AI technology continues to evolve, which means it is time to revisit the fundamentals. 

Organizations in both the private and public sectors are concerned about security and ethics in the adoption of gen AI, but the latter have higher expectations around these issues, Capgemini's Asia-Pacific CEO Olaf Pietschner said in a video interview.

Also: AI risks are everywhere – and now MIT is adding them all to one database

Governments are more risk-averse and, by implication, have higher standards around the governance and guardrails that are needed for gen AI, Pietschner said. They need to provide transparency in how decisions are made, but that requires AI-powered processes to have a level of explainability, he said.

Hence, public sector organizations have a lower tolerance for issues such as hallucinations and the false or inaccurate information generated by AI models, he added.

That puts the focus on the foundation of a modern security architecture, said Frank Briguglio, public sector identity security strategist at identity and access management vendor SailPoint Technologies.

When asked what changes in security challenges AI adoption has meant for the public sector, Briguglio pointed to a greater need to protect data and insert the controls needed to ensure it is not exposed to AI services scraping the internet for training data. 

Also: Can governments turn AI safety talk into action?

In particular, the management of online identities needs a paradigm shift, said Eduarda Camacho, COO of identity management security vendor CyberArk. She added that it is no longer sufficient to use multifactor authentication or depend on native security tools from cloud service providers. 

Furthermore, it is inadequate to apply stronger protection only to privileged accounts, Camacho said in an interview. This is especially pertinent with the emergence of gen AI and, with it, deepfakes, which have made it more complicated to establish identities, she added. 

Also: Most people worry about deepfakes – and overestimate their ability to spot them

Like Camacho, Briguglio espouses the merits of an identity-centric approach, which he said calls for organizations to know where all their data resides and to classify the data so it can be protected accordingly, both from a privacy and security perspective.

Organizations also need to be able to apply those policies to machines in real time, since machines will have access to data too, he said in a video interview. Ultimately, this highlights the role of zero trust, where every attempt to access a network or data is assumed to be hostile and potentially able to compromise corporate systems, he said. 

Attributes or policies that grant access need to be accurately verified and governed, and business users need to have confidence in those attributes. The same principles apply to data: organizations need to know where their data resides, how it is protected, and who has access to it, Briguglio noted. 

Also: IT leaders worry the rush to adopt Gen AI may have tech infrastructure repercussions

He added that identities should be revalidated across the workflow or data flow, with the authenticity of the credential reevaluated each time it is used to access or transfer data, including to whom the data is transferred.
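
The idea can be pictured in a few lines of policy code. The Python sketch below is purely illustrative, assuming hypothetical identities, resources, and entitlements rather than SailPoint's or any vendor's actual API: every request, human or machine, is checked for credential freshness and an explicit entitlement before access is granted.

```python
from dataclasses import dataclass
import time

# Hypothetical attributes; names are illustrative, not from any vendor's product.
@dataclass
class AccessRequest:
    identity: str                # human or machine identity making the request
    resource: str                # the data asset being accessed
    classification: str          # sensitivity requested: "public", "restricted", "secret"
    credential_issued_at: float  # epoch seconds when the credential was issued

MAX_CREDENTIAL_AGE_SECS = 300    # force revalidation every 5 minutes (illustrative)
LEVELS = ["public", "restricted", "secret"]

# Explicit entitlements: (identity, resource) -> highest classification allowed.
ENTITLEMENTS = {("svc-reporting", "citizen-records"): "restricted"}

def authorize(req: AccessRequest) -> bool:
    """Zero-trust check: every request is verified; nothing is trusted by default."""
    # 1. Stale credentials are rejected, forcing revalidation along the data flow.
    if time.time() - req.credential_issued_at > MAX_CREDENTIAL_AGE_SECS:
        return False
    # 2. Access requires an explicit, governed entitlement...
    granted = ENTITLEMENTS.get((req.identity, req.resource))
    if granted is None:
        return False
    # 3. ...and the requested sensitivity must not exceed what was granted.
    return LEVELS.index(req.classification) <= LEVELS.index(granted)

# A fresh, entitled request passes; anything else is denied.
print(authorize(AccessRequest("svc-reporting", "citizen-records",
                              "restricted", time.time())))  # True
```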

It underscores the need for companies to establish a clear identity management framework, an area that today remains highly fragmented, Camacho said. Managing access should not differ based simply on a user's role, she said, urging businesses to invest in a strategy that assumes every identity in their organization is privileged.  

Organizations should assume every identity can be compromised, a risk the advent of gen AI will only heighten, she added. They can stay ahead with a robust security policy and by implementing the necessary internal change management and training, she noted. 

Also: Business leaders are losing faith in IT, according to this IBM study. Here’s why

This is critical for the public sector, especially as more governments begin to roll out gen AI tools in their work environment.

In fact, 80% of organizations in government and the public sector have boosted their investment in gen AI over the past year, according to a Capgemini survey that polled 1,100 executives worldwide. Some 74% describe the technology as transformative in helping drive revenue and innovation, with 68% already working on some gen AI pilots. Just 2%, though, have enabled gen AI capabilities in most or all of their functions or locations. 

Also: AI governance and clear roadmap lacking across enterprise adoption

While 98% of organizations in the sector permit their employees to use gen AI in some capacity, 64% have guardrails in place to manage such use. Another 28% limit such use to a select group of employees, the Capgemini study notes, and 46% are developing guidelines on the responsible use of gen AI. 


However, when asked about their concerns around ethical AI, 74% of public sector organizations pointed to a lack of confidence that gen AI tools are fair, and 56% expressed worries that bias in gen AI models could produce embarrassing results when used by customers. Another 48% highlighted the lack of clarity on the underlying data used to train gen AI applications. 

Focus on data security and governance

As it is, the focus on data security has heightened as more government services go digital, pushing up the risk of exposure to online threats. 

Singapore’s Ministry of Digital Development and Information (MDDI) last month revealed that there were 201 government-related data incidents in its fiscal year 2023, up from 182 reported the year before. The ministry attributed the increase to higher data use as more government services are digitalized for citizens and businesses. 

Furthermore, more government officials are now aware of the need to report incidents, which MDDI said could have contributed to the increase in data incidents. 

Also: AI gold rush makes basic data security hygiene critical

In its annual update on efforts the Singapore public sector has undertaken to protect personal data, MDDI said 24 initiatives were implemented between April 2023 and March 2024. These included a new feature in the sector's central privacy toolkit that anonymized 20 million documents and supported more than 20 gen AI use cases in the public sector. 

Further improvements were made to the government’s data loss protection (DLP) tool, which works to prevent accidental loss of classified or sensitive data from government networks and devices. 
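
A DLP check of this kind can be approximated with simple pattern matching. The Python sketch below is a minimal illustration with invented patterns and markings; the government's actual DLP tool is not described publicly and would use far richer detection than two regular expressions.

```python
import re

# Illustrative patterns only; real DLP engines use much richer detection.
SENSITIVE_PATTERNS = {
    "nric": re.compile(r"\b[STFG]\d{7}[A-Z]\b"),          # Singapore NRIC format
    "marking": re.compile(r"\b(RESTRICTED|CONFIDENTIAL|SECRET)\b"),
}

def scan_outbound(text: str) -> list[str]:
    """Return the names of sensitive patterns found in outbound content."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

hits = scan_outbound("Attached: CONFIDENTIAL briefing for S1234567D")
if hits:
    print(f"Blocked outbound message; matched: {hits}")  # ['nric', 'marking']
```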

All eligible government systems also now use the central accounts management tool that automatically removes user accounts that are no longer needed, MDDI said. This mitigates the risk of unauthorized access by officers who have left their roles as well as threat actors using dormant accounts to run exploits. 
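
MDDI did not detail how the tool works, but the mechanism can be sketched as a periodic sweep. The Python below is a hypothetical illustration with made-up directory records and an invented 90-day threshold, flagging accounts of departed officers and accounts dormant beyond the limit.

```python
from datetime import datetime, timedelta, timezone

DORMANCY_LIMIT = timedelta(days=90)   # illustrative threshold, not MDDI's actual policy

# Hypothetical directory records: (account, last_login, still_employed)
ACCOUNTS = [
    ("alice.tan", datetime(2024, 8, 1, tzinfo=timezone.utc), True),
    ("bob.lim",   datetime(2024, 1, 5, tzinfo=timezone.utc), False),
]

def accounts_to_disable(now: datetime | None = None) -> list[str]:
    """Flag accounts of departed officers and accounts dormant past the limit."""
    now = now or datetime.now(timezone.utc)
    return [name for name, last_login, employed in ACCOUNTS
            if not employed or now - last_login > DORMANCY_LIMIT]

print(accounts_to_disable())  # bob.lim is flagged; alice.tan too, once dormant
```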

Also: Safety guidelines provide necessary first layer of data protection in AI gold rush

As the adoption of digital services grows, so do the risks from data exposure, whether through human oversight or security gaps in technology, Pietschner said. Things can go awry, as the CrowdStrike outage showed, when organizations push to drive innovation and adopt tech faster, he said. 

It highlights the importance of using up-to-date IT tools and adopting a robust patch management strategy, he explained, noting that unpatched old technology still presents the top risk for businesses. 

Briguglio added that it also demonstrates the need to stick to the basics. Security patches and changes to the kernel should not be rolled out without regression testing, or at least testing in a sandbox first, he said. 


A governance framework that guides organizations on how to respond in the event of a data incident is just as important, Pietschner added. For example, it is essential that public sector organizations are transparent and disclose breaches, so citizens know when their personal data is exposed, he said. 

A governance framework should be implemented for gen AI applications, too, he said. This should include policies to guide employees in their adoption of gen AI tools. 

However, 63% of organizations in the public sector have yet to decide on a governance framework for software engineering, according to a different Capgemini study that surveyed 1,098 senior executives and 1,092 software professionals globally. 

Despite that, 88% of software professionals in the sector are using at least one gen AI tool that is not officially authorized or supported by their organization. This figure is the highest among all verticals polled in the global study, Capgemini noted. 

It indicates that governance is critical, Pietschner said. If developers use unauthorized gen AI tools, they can inadvertently expose internal data that should be secured, he said. 

He noted that some governments have created customized AI models to add a layer of trust and enable them to monitor their use. This can then ensure employees use only authorized AI tools, protecting the data involved. 
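
One way to picture such a setup is a thin gateway that only forwards prompts to sanctioned model endpoints and records each use for audit. The Python sketch below is hypothetical; the endpoint URL, user names, and log format are invented, not any government's actual system.

```python
APPROVED_MODELS = {"https://ai.gov.example/model"}  # hypothetical sanctioned endpoint
audit_log: list[dict] = []

def route_prompt(endpoint: str, user: str, prompt: str) -> None:
    """Forward prompts only to approved models, logging every request for audit."""
    if endpoint not in APPROVED_MODELS:
        raise PermissionError(f"Unapproved AI tool: {endpoint}")
    # Record who used which model, without storing the prompt text itself.
    audit_log.append({"user": user, "endpoint": endpoint, "chars": len(prompt)})
    # ...forward the prompt to the approved model here...

route_prompt("https://ai.gov.example/model", "officer-42", "Summarise this policy")
print(audit_log)  # [{'user': 'officer-42', 'endpoint': ..., 'chars': 21}]
```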

Also: Transparency is sorely lacking amid growing AI interest

More importantly, public sector organizations can eliminate any bias or hallucinations in their AI models, he said, and the necessary guardrails should be in place to mitigate the risk of these models generating responses that contradict the government's values or intent. 

He added that a zero-trust strategy is easier to implement in the public sector where there is a higher level of standardization. There are often shared government services and standardized procurement processes, for instance, making it easier to enforce zero-trust policies. 

In July, Singapore announced plans to release technical guidelines and offer “practical measures” to bolster the security of AI tools and systems. The voluntary guidelines aim to provide a reference for cybersecurity professionals looking to improve the security of their AI tools and can be adopted alongside existing security processes implemented to address potential risks in AI systems, the government stated. 

Also: How Singapore is creating more inclusive AI

Gen AI is evolving rapidly, and the true power of the technology and how it can be used is not yet fully understood, Briguglio said. That calls for organizations, including those in the public sector that plan to use gen AI in their decision-making processes, to ensure there is human oversight and governance to manage access and sensitive data. 

“As we build and mature these systems, we need to be confident the controls we place around gen AI are adequate for what we’re trying to protect,” he said. “We need to remember the basics.”

Used well, though, AI can work with humans to better defend against adversaries applying the same AI tools in their attacks, said Eric Trexler, Palo Alto Networks' US public sector business lead.

Also: AI is changing cybersecurity and businesses must wake up to the threat

Mistakes can happen, so the right checks and balances are needed. When done right, AI will help organizations keep up with the velocity and volume of online threats, Trexler said in a video interview. 

Recalling his prior experience running a team that carried out malware analysis, he said automation provided the speed to keep up with the adversaries. “We just don’t have enough humans and some tasks the machines do better,” he noted. 

AI tools, including gen AI, can help "find the needle in a haystack," which humans would struggle to do when the volume of security events and alerts can run into the millions each day, he said. AI can look for markers or indicators across an array of multifaceted systems collecting data and create a summary of events, which humans can then review, he added.
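
That needle-in-a-haystack step can be pictured as collapsing raw events into counts and surfacing rare indicators for human review. The Python below is a toy sketch with invented event records, not a description of any Palo Alto Networks product.

```python
from collections import Counter

# Invented, normalized events collected from multiple systems.
EVENTS = [
    {"host": "web-01", "indicator": "failed_login"},
    {"host": "web-01", "indicator": "failed_login"},
    {"host": "web-02", "indicator": "failed_login"},
    {"host": "db-02",  "indicator": "privilege_escalation"},
]

RARE_THRESHOLD = 2  # indicators seen fewer times than this get surfaced

def summarize(events: list[dict]) -> dict:
    """Collapse raw events into totals and flag rare indicators for analysts."""
    counts = Counter(e["indicator"] for e in events)
    needles = [ind for ind, n in counts.items() if n < RARE_THRESHOLD]
    return {"totals": dict(counts), "needles": needles}

print(summarize(EVENTS))
# {'totals': {'failed_login': 3, 'privilege_escalation': 1},
#  'needles': ['privilege_escalation']}
```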

Also: Artificial intelligence, real anxiety: Why we can’t stop worrying and love AI

Trexler, too, stressed the importance of recognizing that things can still go wrong and of establishing the necessary framework, including governance, policies, and playbooks, to mitigate such risks. 
