ZDNET’s key takeaways
- The regulatory landscape is evolving and creating new demands.
- Business leaders can use compliance to guide AI innovations.
- Internal and external partners can help organizations deliver results.
The AI gold rush has put new pressure on governments and other public agencies. As enterprises look to gain a competitive advantage from emerging technologies, governing bodies are eager to implement rules and regulations that protect individuals and their data.
The most high-profile AI legislation is the EU’s AI Act. However, global law firm Bird & Bird has developed an AI Horizon Tracker that analyzes 22 jurisdictions and presents a broad spectrum of regional approaches.
Also: 5 ways Lenovo’s AI strategy can deliver real results for you too
Digital and business leaders must find ways to comply with these rules. Compliance can be a burden, but it needn't be a hindrance. Here, five business leaders offer five ways to use governance to guide your AI explorations.
1. Explore within constraints
Art Hu, global CIO at Lenovo, said there’s no single answer to the question of how to balance AI innovation and governance effectively.
“Responses in industries, sectors, and governments will vary, sometimes wildly, in terms of what your responsibilities are,” he said.
Hu told ZDNET that, as a general rule, business leaders should pay close attention to the upcoming rules and regulations they will have to adhere to in the age of AI.
Also: 5 ways to prevent your AI strategy from going bust
“The penalty for getting things wrong is quite hot right now. You have significant tail risk in a way that you didn’t before,” he said, before suggesting that executives should focus on carefully guided AI explorations.
“I think it goes back to the toolbox that you can build and how you encourage innovation, typically, through whitelists and some kind of sandboxing, where you say, explore, but within a constraint, because you don’t want explorations to generate one of these long-tail, adverse outcomes that you’re stuck with.”
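In code, the pattern Hu describes might look something like the minimal Python sketch below. Everything in it (the approved-model list, the SandboxPolicy fields, the run_experiment gate) is a hypothetical illustration of whitelisting and sandboxing, not Lenovo's actual tooling.

```python
from dataclasses import dataclass

# Whitelist: only models that have cleared governance review.
# (Model names here are placeholders, not an endorsed list.)
APPROVED_MODELS = {"model-a", "model-b"}

@dataclass
class SandboxPolicy:
    """Constraints applied to every exploratory AI run."""
    allow_customer_data: bool = False  # experiments never touch real customer data
    max_budget_usd: float = 50.0       # hard spend cap per experiment
    log_all_prompts: bool = True       # keep a full audit trail for reviewers

def run_experiment(model: str, policy: SandboxPolicy) -> None:
    """Gate an exploratory run behind the whitelist and the sandbox policy."""
    if model not in APPROVED_MODELS:
        raise PermissionError(f"{model!r} is not on the approved-model whitelist")
    if policy.allow_customer_data:
        raise PermissionError("sandboxed experiments may not use customer data")
    # The actual experiment would run here, inside the policy's constraints.
    print(f"Running sandboxed experiment on {model} (cap: ${policy.max_budget_usd})")

run_experiment("model-a", SandboxPolicy())
```

The point of the gate is that exploration stays cheap to approve: anything inside the whitelist and the policy runs freely, while anything outside it fails loudly before it can create one of those long-tail outcomes.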
2. Work alongside partners
Paul Neville, director of digital, data, and technology at UK agency The Pensions Regulator (TPR), suggested business leaders must recognize that AI presents an epochal shift, not just a refresh of the way organizations run technology today.
“I have said this in a few conferences, but I’ll repeat it: We assume that the future is just automating what we do today, but a bit quicker,” he said.
“First, I don’t think that approach is particularly visionary. And second, it won’t get us beyond the problems of today. Visionary leaders must paint a picture of how things could be different.”
Neville told ZDNET that pioneering executives help other professionals imagine a better future: “If you think AI is just going to be a bit quicker than today, you won’t get what you need out of it. I think there’s potentially fundamentally different working patterns and opportunities.”
Also: This company’s AI success was built on 5 essential steps – see how they work for you
At TPR, Neville’s team works with the UK government to understand how new rules and regulations can guide effective AI explorations.
“There’s a new piece of legislation, a new pensions bill, and there’s quite a lot of technology that will be needed and new customer experiences,” he said.
“We’re working very closely with the government to make sure that we’re delivering modern digital services, and that legislation will help us do that. AI can help us create something much more interactive, interesting, iterative, and visual at the same time. That’s the opportunity.”
3. Manage bespoke cases
Martin Hardy, cyber portfolio and architecture director at Royal Mail, said he believes that businesses can use compliance as a route to explore AI and manage risk.
“In cyber, we do a lot of threat-modelling and a lot of it’s quite generic and low-level, and where my security architects add value is in those bespoke niche cases,” he said.
“Having an AI do 80% of the work, so you’re no longer working from a blank document, and we can say, ‘Oh yeah, you need to put this security control in place,’ means we can then give our security professionals the time to focus on what could happen, such as a particular threat actor that we’re worried about in our sector, and that approach really adds value.”
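Hardy's division of labour lends itself to a simple pattern: let a model draft the generic controls and route anything it can't cover to a human. The Python sketch below is a hypothetical illustration; the feature names and control catalogue are invented, and a real pipeline would call an AI model rather than a lookup table.

```python
# A generic catalogue standing in for the "80%" an AI model could draft.
GENERIC_CONTROLS = {
    "public_endpoint": "Enforce TLS 1.2+ and put a WAF in front of the service",
    "user_login": "Require MFA and rate-limit authentication attempts",
    "stores_pii": "Encrypt data at rest and restrict access by role",
}

def draft_threat_model(features: list[str]) -> tuple[dict, list[str]]:
    """Return (auto-drafted controls, bespoke items for human review).

    In a real pipeline the draft would come from an AI model; a lookup
    table keeps this sketch self-contained and runnable.
    """
    drafted = {f: GENERIC_CONTROLS[f] for f in features if f in GENERIC_CONTROLS}
    bespoke = [f for f in features if f not in GENERIC_CONTROLS]
    return drafted, bespoke

drafted, bespoke = draft_threat_model(
    ["public_endpoint", "user_login", "sector_specific_threat_actor"]
)
print("Auto-drafted controls:", list(drafted))
print("Escalate to security architects:", bespoke)
```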
Also: Dreading AI job cuts? 5 ways to future-proof your career – before it’s too late
Hardy told ZDNET that business leaders must also recognize the risk of relying on AI and data-heavy technologies. The message is clear: Use AI but proceed with care.
“By putting all that data into your systems, if an AI model is breached, then an attack has a blueprint about where all your weaknesses are,” he said.
“So, it’s a Catch-22 situation — if you don’t use AI, other people will, and you’ll fall behind. If you do use it, and you’re not careful, you could be part of the crowd that gets stung by an attack.”
4. Foster key relationships
Ian Ruffle, head of data and insight at UK auto breakdown specialist RAC, said that managing the balance between governance and innovation is all about internal culture.
“Everything comes back to people,” he said. “I think success is about applying the right technology, but the appropriate use of that technology as well — and that’s all about having the right people.”
Ruffle told ZDNET that senior leaders can’t be expected to be aware of every possible threat or risk at a granular level, which is why establishing a strong culture is paramount, particularly when working alongside trusted internal specialists.
Also: Gemini vs. Copilot: I compared the AI tools on 7 everyday tasks, and there’s a clear winner
“You’ve got to empower people to care about the individuals that this piece of data is representing,” he said.
“That’s a culture thing for me. Fostering relationships with your data protection officer and information security teams is almost more important in the long run than forging ahead and using the most modern technology.”
In short, balancing governance and innovation is tricky — and keeping humans in the loop is critical to success.
“You do have to walk a tightrope,” said Ruffle. “There’s a reason why I think organizations need humanness to think about these problems effectively.”
5. Ask crucial questions
Erik Mayer, transformation chief clinical information officer at Imperial College London and Imperial College Healthcare NHS Trust, said professionals who use data for AI projects must be careful to ensure the work they undertake to comply with governance doesn’t create new issues: “If you over-clean data, you’re probably going to bias the AI. That’s the problem.”
To overcome this challenge, Mayer told ZDNET his team maintains regular conversations with regulatory authorities, focused on generating answers to key questions. “What are the KPIs you need around a data set to support the regulatory approval of AI to ensure that it’s going to work in the way it’s intended when you put it into the real world? What was the quality of the data? How many duplicates, how many missing values? What’s the actual data definition?”
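Some of those KPIs are straightforward to compute. The sketch below, assuming pandas, counts duplicate rows and missing values on a toy data set; the column names and records are illustrative, not drawn from Imperial's systems.

```python
import pandas as pd

# Toy records; the duplicate row and the missing value are deliberate.
df = pd.DataFrame({
    "patient_id": [1, 2, 2, 3],
    "systolic_bp": [120, 135, 135, None],
})

kpis = {
    "rows": len(df),
    "duplicate_rows": int(df.duplicated().sum()),
    "missing_values_per_column": {k: int(v) for k, v in df.isna().sum().items()},
}
print(kpis)
# {'rows': 4, 'duplicate_rows': 1,
#  'missing_values_per_column': {'patient_id': 0, 'systolic_bp': 1}}
```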
Also: Cloud-native computing is poised to explode, thanks to AI inference work
The lesson for other digital leaders is that attempts to clean data for new projects could unintentionally remove variables that would be useful in the future. Mayer advised other professionals to take proactive steps.
“Ultimately, you want the rawest form of data. However, when you have to clean it or transform it, you must know exactly how you’ve transformed and documented it,” he said.
“That’s the fundamental element. That is the piece we’ve got to get absolutely right. People must consider how they can say, ‘Yes, this is safe to implement.’ And then long-term success is about ongoing validation.”
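That documentation habit can be as simple as keeping the raw frame untouched and appending a note for every transformation. Here is a minimal, hypothetical illustration of such an audit trail, again assuming pandas.

```python
import pandas as pd

raw = pd.DataFrame({"systolic_bp": [120, 135, 135, None]})
log = []  # audit trail that travels with the cleaned dataset

deduped = raw.drop_duplicates()
log.append(f"drop_duplicates: removed {len(raw) - len(deduped)} row(s)")

cleaned = deduped.dropna()
log.append(f"dropna: removed {len(deduped) - len(cleaned)} row(s) with missing values")

print(log)       # exactly what changed, and in what order
print(cleaned)   # the raw frame itself is left untouched
```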
