
Microsoft’s new AI agents won’t just help us code, now they’ll decide what to code




ZDNET’s key takeaways

  • Microsoft is moving beyond copilots to fully autonomous agents.
  • Foundry and MCP let agents assemble solutions using 1,400 tools.
  • IQ services aim to give agents true context and understanding.

At Microsoft Ignite 2025 on Tuesday, the company is showcasing a wide range of capabilities designed to move the enterprise into an all-in agentic AI future.

In the companion article to this piece, I wrote about how Microsoft is moving the enterprise towards self-running, self-repairing platforms. But that’s not the whole story.

Also: How Microsoft’s new plan for self-repairing data centers will transform IT roles

That article talks about the operational side of autonomous agents, software that can now monitor, diagnose, and repair itself. But what about software creation? Ignite 2025 also contains announcements that reflect the future where software is assembled, extended, and evolved by autonomous AI agents.

Basically, not only will AI agents help us code, they will also decide what to code and then build those solutions. Boom! Mind blown.

Before we move on to the specific technologies Microsoft announced that make this possible, I want to share a caution. I’ve been using agentic AI via Claude Code and ChatGPT Codex to code, and while the results are nothing short of fantastic, the process is messy as heck.

Also: I’ve tested free vs. paid AI coding tools – here’s which one I’d actually use

For every working capability I get back from the AI, I've had to slog through five or 10 drafts where the AI misunderstood the assignment, outright lied about its ability to do what it claimed, ignored instructions, or went completely off the rails. No doubt, AI-assisted coding has saved me time. But it also shows that AI agents need rather extensive supervision.

So the idea that perhaps AI agents will supervise other AI agents rings hollow. As we discuss these new capabilities of AI agents to assemble software tools on demand, keep in mind the ever-growing need for qualified human oversight.

OK, now onto the announcements.

Agent 365 gives ‘personhood’ to software

This is weird, but work with me here for a minute. In American law (and this applies in other countries as well), corporations are treated as legal persons, holding many of the rights and responsibilities of a natural person. The analogy isn't perfect, but it's worked well enough to keep lawyers in suits for a long time.

Inside an organization, from an IT perspective, humans have always had a different management status than scripts, apps, routines, and other code. Humans are considered “users,” and are tracked with unique identities, permissions, governance, observability, and lifecycle management.

Now, here’s the key phrase from Microsoft: “Microsoft Agent 365 will extend the infrastructure for managing users to agents — helping organizations govern agents responsibly and at scale.”

Also: Microsoft’s new AI agents create your Word, Excel, and PowerPoint projects now

Essentially, Microsoft is turning agents into users, not just chunks of code. When it comes to assembling and deploying software, those agents will be acting as users, performing the same class of tasks that human users have performed historically.

While continuously running software daemons have existed on servers for decades, this is something new. Cron jobs and other traditional background processes perform specific, predefined, deterministic tasks.

But agents in this new Microsoft model are goal-driven rather than task-driven. They have intent, state, knowledge, and context. Unlike daemons that exist under a machine account, agents under Agent 365 will be enumerated, onboarded, offboarded, audited, and permission-scoped, not as cron jobs, but as what are essentially digital workers.
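Microsoft hasn't published Agent 365's actual data model, so here's a purely hypothetical sketch of what treating an agent like a user record might look like. Every field and method name below is an assumption for illustration, not Microsoft's schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch: an "agent as user" directory record.
# Microsoft has not published Agent 365's schema; every name
# here is an assumption for illustration only.

@dataclass
class AgentIdentity:
    agent_id: str                   # unique identity, like a user principal name
    owner: str                      # the human accountable for this agent
    scopes: list[str] = field(default_factory=list)  # permission-scoped access
    onboarded_at: datetime | None = None
    offboarded_at: datetime | None = None
    audit_log: list[str] = field(default_factory=list)

    def onboard(self) -> None:
        """Enroll the agent the way IT would enroll a new employee."""
        self.onboarded_at = datetime.now(timezone.utc)
        self.audit_log.append(f"onboarded at {self.onboarded_at.isoformat()}")

    def offboard(self) -> None:
        """Revoke all access when the agent is retired."""
        self.scopes.clear()
        self.offboarded_at = datetime.now(timezone.utc)
        self.audit_log.append(f"offboarded at {self.offboarded_at.isoformat()}")

# Usage: a report-building agent, scoped like a digital worker.
agent = AgentIdentity(agent_id="reporting-agent-01", owner="itadmin",
                      scopes=["read:sales_db", "write:reports"])
agent.onboard()
```

The point of the sketch is the lifecycle: identity, accountable owner, scoped permissions, onboarding, offboarding, and an audit trail, exactly the treatment human users already get.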

Microsoft Foundry adds MCP tools catalog

To understand this next big swing, you need to understand Model Context Protocol (MCP). This is a standard protocol introduced by Anthropic about a year ago, and it's game-changing. Basically, it's a standard way for LLM-based AIs and services (think Slack, Google Drive, PostgreSQL, etc.) to talk to each other.

What makes this big is that each LLM or AI implementation doesn’t have to construct a customized API-based connection to each service. As long as the LLM has MCP on its side and the service has MCP on its side, they can communicate. Plus, it’s a two-way thing. In other words, the AI can initiate a request to the service. But services can also send prompts to the AIs and get back results.
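Under the hood, MCP is built on JSON-RPC 2.0. Here's a simplified sketch of the two messages at the heart of the tool flow: a client discovering a server's tools, then calling one. The tool name and arguments are made up for illustration; real servers define their own:

```python
import json

# MCP runs over JSON-RPC 2.0. A client first discovers what a server
# offers via "tools/list", then invokes a tool via "tools/call".
# The tool name and arguments below are hypothetical examples.

list_tools = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

call_tool = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "query_database",   # hypothetical tool name
        "arguments": {"sql": "SELECT count(*) FROM invoices"},
    },
}

print(json.dumps(call_tool, indent=2))
```

Because every MCP server answers the same two questions the same way, a client that speaks the protocol can work with any of them, which is what makes the LEGO analogy below apt.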

Via the standardized interface of MCP, AIs and services have become LEGO building blocks, where each can snap into the other. This totally supports our thesis of self-building tools. Because now that AIs have a mechanism to snap-click to services in a predictable way, agents can do so when they need to (subject to permissions, entitlements, and such).

Also: What is Model Context Protocol? The emerging standard bridging AI and data, explained

All that brings us back to Microsoft's announcement. Microsoft says Foundry will "enable developers to enrich agents with real-time business context, multimodal capabilities and custom business logic through a unified catalog of Model Context Protocol (MCP) tools built with security and governance in mind."

That catalog contains a whopping 1,400 tools, covering systems like SAP, Salesforce, and HubSpot, right out of the gate. Microsoft is also providing MCP extensibility, where developers can enable any API or function to work as an MCP server through Foundry.
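Microsoft hasn't detailed Foundry's extensibility mechanism, but the open-source MCP Python SDK shows the general shape of wrapping an ordinary function as an MCP tool. Here's a minimal sketch using the SDK's FastMCP helper; the inventory function is a made-up example, and Foundry's own wrapping will differ:

```python
# Requires: pip install mcp
# Sketch using the open-source MCP Python SDK's FastMCP helper to
# expose an ordinary function as an MCP tool. The inventory function
# is a made-up example; Foundry's own mechanism will differ.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("inventory-server")

@mcp.tool()
def check_stock(sku: str) -> int:
    """Return the number of units on hand for a given SKU."""
    # A real server would query an actual inventory system here.
    fake_inventory = {"WIDGET-1": 42, "WIDGET-2": 0}
    return fake_inventory.get(sku, 0)

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```

Once a function is exposed this way, any MCP-capable agent can discover and call it without custom integration code, which is the whole point of the catalog.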

Now, let’s think back to the autonomous creation thesis we introduced at the beginning of this article. AI agents running on Microsoft environments won’t be writing their own code from scratch. But they will be empowered to assemble tools from an ever-growing catalog of MCP servers.

Essentially, Microsoft is providing a roadmap for AI agents to become mashup artists.

Enabling agents to understand context

Let's recap. Microsoft is making it possible for agents to be treated as people-like digital workers (as worrying and freaky as that sounds). Microsoft is also enabling agents to snap-assemble tools via MCP instead of hand-coding API integrations or building everything from scratch.

The next logical question becomes, “How can AI agents intelligently assemble solutions if they don’t genuinely understand the business environment, the meaning of the data they see, or what happened last time?”

That’s where the next Microsoft announcement comes in. For agents to successfully build and deploy solutions to match defined intents, they can’t treat each task like a brand new operation. Agents need shared context, semantic understanding, and long-term memory.

Also: Microsoft is packing more AI into Windows, ready or not – here’s what’s new

This is really powerful. Context answers questions like, “What is this for?” and “How does it fit with other things?” Memory answers questions like, “Has anyone tried this before?” and “Did that work?” Semantics answers questions like, “What does customer, invoice, priority, or owner actually mean in this company?”

To solve this, Microsoft has introduced Work IQ, Fabric IQ, and Foundry IQ. For the record, Work IQ is also the brand name for the makers of the IQ Vice, an enormously flexible and capable workshop tool. We’re talking about a different thing here. Beyond cool workshop tools, here’s what Microsoft’s three new software tools do for the agentic build stack.

  • Work IQ: Gives agents awareness of what workers are doing, and how work flows through Microsoft 365.
  • Fabric IQ: Gives agents business-level data meaning using Microsoft Fabric’s semantic models.
  • Foundry IQ: Gives agents unified knowledge access and long-term recall across multiple data sources.
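Microsoft hasn't published developer APIs for these services yet, so here's a hedged conceptual sketch of how an agent might consult all three layers before acting. Every interface and method name is hypothetical; the sketch only illustrates the three kinds of questions described above:

```python
from typing import Protocol

# Hypothetical sketch only: Microsoft has not published developer APIs
# for Work IQ, Fabric IQ, or Foundry IQ. These interfaces illustrate
# the three kinds of questions described in the article, nothing more.

class WorkContext(Protocol):      # "What is this for? How does work flow?"
    def current_workflow(self, team: str) -> str: ...

class SemanticModel(Protocol):    # "What does 'invoice' actually mean here?"
    def define(self, term: str) -> str: ...

class AgentMemory(Protocol):      # "Has anyone tried this before? Did it work?"
    def recall(self, topic: str) -> list[str]: ...

def plan_solution(goal: str, work: WorkContext,
                  semantics: SemanticModel, memory: AgentMemory) -> str:
    """Assemble a plan only after grounding it in context, meaning, and history."""
    flow = work.current_workflow("finance")
    meaning = semantics.define("invoice")
    history = memory.recall(goal)
    return (f"Goal: {goal}\nWorkflow: {flow}\n"
            f"'invoice' means: {meaning}\nPrior attempts: {len(history)}")
```

The design point is that the plan is built from shared organizational context rather than from a cold start on every task.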

Microsoft’s IQ announcements are aimed at going beyond giving agents access to data. They’re building a substrate that enables the shared organizational understanding required to make accurate, context-aware decisions. With Work IQ, Fabric IQ, and Foundry IQ, Microsoft is signaling that future enterprise software won’t simply execute. It will understand, adapt, and build in ways that previously required human involvement.

I feel good. Don’t you feel good? It’s not like this is the start of Skynet or anything. Right? Right?

What’s it all mean?

So let’s be clear. These announcements don’t mean that full self-building agentic software environments exist today. But Microsoft is showing that it has identified the missing architectural ingredients, and most are available for preview.

Undoubtedly, progress will be incremental, messy, and require considerable human supervision. But we’re starting to see the roadmap showing how AI will transform enterprise IT environments going forward.

Now it’s your turn. Do you think Microsoft’s move toward agent-driven software will change the way applications are built and operated inside organizations? Do you think agents assembling solutions from existing tools is a practical near-term step, or still more of a long-term aspiration? And what level of human oversight do you believe will be required as these systems mature? Share your thoughts in the comments below.


You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, on Bluesky at @DavidGewirtz.com, and on YouTube at YouTube.com/DavidGewirtzTV.
