These 3 AI themes dominated SXSW – and here’s how they can help you navigate 2025
Sabrina Ortiz/ZDNET

Although AI technology capable of taking over the world is limited to science fiction literature and movies, existing artificial intelligence is capable of wrongdoing, such as producing hallucinations, training on people's data, and using other people's work to create new outputs. How do these shortcomings square with rapid AI adoption?

That question was heavily explored at SXSW, with most AI-related sessions either touching upon — or diving deep into — the topic of AI safety. Company leaders from IBM, Meta, Microsoft, and Adobe, to name a few, had insights to share on the future of AI. The consensus? It's not all doom and gloom.

Also: Microsoft is an AGI skeptic, but is there tension with OpenAI?

"AI needs a better PR agent; everything we have learned is from sci-fi," said Hannah Elsakr, founder of Firefly for Enterprise at Adobe. "We think AI is going to take over our lives; that's not the purpose of it."

Regardless of the panel, the leaders from some of the largest AI tech companies discussed three overarching themes about how safety and responsibility fit into the future of the technology. What they had to say may help put your concerns at ease.

1. The use case matters

There is no denying that AI systems are flawed. They often hallucinate and incorporate biases into their responses. As a result, many worry that incorporating AI systems into the workplace will introduce errors into internal processes, negatively impacting employees, clients, and business goals.

The key to mitigating this issue is carefully considering which tasks you delegate to AI. For example, Sarah Bird, CPO of responsible AI at Microsoft, looks for use cases that are a good match for what the technology can do today.
