IDC estimates that companies invested $16B in generative AI solutions in 2023, and that they’ll invest $140B in 2027 — a compound annual growth rate (CAGR) of 70%, according to the Trends in AI for CRM research from Salesforce. Whether generating content and communications or optimizing processes, teams are discovering the best ways to incorporate AI in the flow of their work as investments ramp up.
No fewer than 92% of sales, service, marketing, or commerce teams are at least considering AI investments. AI is the top priority for CEOs and — quite often — a high-priority discussion in the boardroom. And that discussion goes beyond technology, encompassing the workplace skills and policies needed in the AI era.
Employees recognize the transformative impact of AI on their careers, but their employers are largely falling behind in empowering them for success. Fifty-six percent of desk workers believe generative AI will transform their roles, but only 21% say their company has provided clear policies around its use.
To better understand the nature of AI discussions and priorities in the boardroom, the weekly DisrupTV podcast, co-hosted by Constellation Research founder and CEO Ray Wang and yours truly, dove into the crucial role of responsible AI governance in a rapidly evolving technological landscape.
Our discussion featured three prominent executives with senior leadership and boardroom experience across the public and private sectors: Miriam Vogel, president and CEO of Equal AI, a non-profit dedicated to promoting responsible AI governance; Teresa Carlson, a venture capitalist with General Catalyst and a private-sector investor with extensive experience in technology and government; and Dr. David Bray of the non-partisan Stimson Center, a champion of positive change and a leading voice on technology's impact on governments and businesses alike.
Miriam Vogel’s insights
Miriam Vogel, a leading voice in responsible AI governance, underscored the importance of inclusivity and equity in AI development and deployment. She emphasized the need to ensure that AI benefits all communities, not just a select few. She advocated for AI literacy, urging individuals to understand the capabilities and limitations of AI tools. Vogel also highlighted the importance of engaging diverse populations in AI development, ensuring that AI systems are representative of the communities they serve.
Vogel emphasized the need for a global consensus on AI definitions and standards, particularly in the absence of clear international frameworks. She stressed the importance of industry, government, and society working together to establish best practices for responsible AI governance. She highlighted the work of Equal AI in promoting responsible AI governance through programs for business leaders, policymakers, and lawyers.
Teresa Carlson’s vision
Teresa Carlson shared her perspective on the exciting opportunities and challenges presented by generative AI. She emphasized the need for a risk-based approach to AI, acknowledging the potential for cybersecurity threats. Carlson pointed to the rapid pace of technological innovation and the need for technologies that can keep up with evolving threats.
Carlson stressed the importance of global resilience in the face of geopolitical shifts and the increasing focus on national technological sovereignty. She pointed to growing investment in AI across sectors including defense, healthcare, and legal services, and urged companies to move fast and responsibly, seizing the opportunities AI presents while ensuring ethical and inclusive development.
David Bray’s perspective
Dr. Bray emphasized the need for a pragmatic approach to AI, urging companies and governments to prioritize the business case before deploying AI technology. He highlighted the risks that organizations face when adopting AI without fully understanding its implications, potentially leading to unintended consequences.
He stressed the importance of respecting data and recognizing the diverse needs of different communities, particularly in the US and similarly free and open societies. Dr. Bray also warned of the potential for AI to be misused for malicious purposes, such as creating misinformation at an unprecedented scale. He advocated for private-sector solutions to counter these threats, emphasizing the need for tools that help people discern accurate information from fabricated content. He highlighted the importance of data governance and the need to move beyond the outdated "data is the new oil" metaphor, suggesting a shift toward data cooperatives and communities.
The importance of collaboration: Our podcast underscored the importance of collaboration among industry, government, and non-profit organizations in shaping the future of AI. The panelists emphasized the need for a multi-stakeholder approach to address the challenges and opportunities presented by AI, and highlighted the role of public-private partnerships, government funding, and venture capital investment in driving responsible AI innovation.
The future of AI: In summarizing the DisrupTV episode, we observed common themes in our guests' predictions for the future of AI. Each of our three guests emphasized the need for continued education and training to prepare the workforce for a world increasingly shaped by AI. They highlighted the importance of AI literacy, particularly for younger generations, to ensure that AI is used responsibly and effectively. The guests also emphasized the need for a more inclusive approach to AI development, ensuring that all communities benefit from its advancements.
Key takeaways: Vogel, Carlson, and Bray provided valuable insights for the boards of businesses and governments alike regarding the critical issues surrounding AI governance and its impact on society. They agreed on the need for a pragmatic, inclusive, and responsible approach to AI, ensuring that its benefits are shared by all. All three also agreed on the importance of collaboration, education, and investment in shaping a future where AI serves as a force for good.
This article was co-authored by Dr. David Bray, principal and CEO at LeadDoAdapt (LDA) Ventures.