Azure AI Foundry Agent Service Gets Boost with OpenAI-Powered Deep Research Capabilities

Azure AI Foundry now offers OpenAI-powered agents to automate deep, multi-step web research.


Key Takeaways:

  • Microsoft introduces Deep Research in Azure AI Foundry to bring OpenAI’s autonomous research tools to developers.
  • The service enables creation of multi-agent systems for complex, multi-step research workflows.
  • Deep Research offers transparency, source-grounded insights, and integration with Azure tools.

Microsoft has announced the public preview of Deep Research in Azure AI Foundry, an API- and SDK-based service that integrates OpenAI’s agentic research capabilities directly into the Azure ecosystem. The new offering lets developers build intelligent research agents that can autonomously plan, analyze, and synthesize information at scale.

Earlier this year, OpenAI first launched Deep Research as an agentic tool within ChatGPT that can autonomously perform multi-step research on the internet. It’s designed to handle complex, time-intensive tasks and mimics the work of a human research analyst. Deep Research uses a version of OpenAI’s o3 model, which is optimized for web browsing and data analysis.

“With Deep Research, developers can build agents that deeply plan, analyze, and synthesize information from across the web—automate complex research tasks, generate transparent, auditable outputs, and seamlessly compose multi-step workflows with other tools and agents in Azure AI Foundry,” said Yina Arenas, Vice President of Product, Core AI.

According to Microsoft, Deep Research in Azure AI Foundry Agent Service is infrastructure rather than a chat UI. It enables developers to create multi-agent systems that trigger tasks, perform research, summarize findings, and distribute the results.

What are the key capabilities of Deep Research?

Microsoft outlined several key features of Deep Research for Azure AI Foundry users. Deep Research offers a best-in-class research model that leverages Bing Search to ground its insights in sources. It also lets developers build reusable agents that can be invoked by apps, workflows, or other agents.

Additionally, Deep Research integrates with Azure Logic Apps, Azure Functions, and other tools to automate tasks like reporting and notifications. It offers full control, transparency, and compliance for research processes.

How does it work?

The o3-deep-research model in Azure AI Foundry is designed to automate complex research tasks by orchestrating a multi-step process. It begins by clarifying the user’s intent using advanced GPT models like GPT-4o and GPT-4.1 to ensure the task is well-defined and relevant. It then uses Bing Search to gather high-quality, up-to-date web data, grounding the research in reliable sources.

The model performs deep analysis and synthesis, reasoning through information step-by-step to produce comprehensive insights. The final output includes citations, reasoning paths, and clarifications. Developers can now build research agents into broader systems, which enables automated workflows like generating reports or presentations and distributing them.
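The clarify → ground → synthesize loop described above can be sketched in plain Python. This is purely illustrative: the function names and data shapes below are hypothetical stand-ins, and in the actual service these stages are handled by GPT-4o/GPT-4.1, Bing Search, and the o3-deep-research model rather than local code.

```python
from dataclasses import dataclass, field

@dataclass
class ResearchReport:
    """Final output: findings paired with citations, mirroring the
    service's source-grounded, auditable results."""
    question: str
    findings: list[str] = field(default_factory=list)
    citations: list[str] = field(default_factory=list)

def clarify_intent(raw_prompt: str) -> str:
    """Stage 1 (stand-in): refine the user's prompt into a well-defined
    question -- done by a GPT model in the actual service."""
    return raw_prompt.strip().rstrip("?") + "?"

def gather_sources(question: str) -> list[tuple[str, str]]:
    """Stage 2 (stand-in): ground the task in web data -- Bing Search in
    the actual service. Returns (snippet, source_url) pairs."""
    return [("Example finding about " + question, "https://example.com/source")]

def synthesize(question: str, sources: list[tuple[str, str]]) -> ResearchReport:
    """Stage 3 (stand-in): analyze and synthesize, keeping the citation
    for each finding so the output stays auditable."""
    report = ResearchReport(question=question)
    for snippet, url in sources:
        report.findings.append(snippet)
        report.citations.append(url)
    return report

def deep_research(raw_prompt: str) -> ResearchReport:
    """Run the full clarify -> ground -> synthesize pipeline."""
    question = clarify_intent(raw_prompt)
    return synthesize(question, gather_sources(question))

report = deep_research("What changed in Azure AI Foundry this year")
print(len(report.findings), len(report.citations))  # → 1 1
```

The point of the sketch is the shape of the orchestration: each stage's output feeds the next, and citations travel with findings all the way to the final report, which is what makes the real service's output transparent and auditable.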

Microsoft notes that pricing for Deep Research (the o3-deep-research model) starts at $10 per million input tokens and $40 per million output tokens, with a discounted rate of $2.50 per million tokens for cached input. Customers also pay separately for GPT-based intent scoping and Bing Search.
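Using the preview prices quoted above, a rough per-run cost for the o3-deep-research model alone can be estimated as below; note this deliberately excludes the separately billed GPT intent-scoping and Bing Search charges, and the example token counts are made up.

```python
# Preview prices quoted for the o3-deep-research model (USD per 1M tokens).
PRICE_INPUT = 10.00
PRICE_CACHED_INPUT = 2.50
PRICE_OUTPUT = 40.00

def o3_deep_research_cost(input_tokens: int, output_tokens: int,
                          cached_input_tokens: int = 0) -> float:
    """Model-only cost estimate in USD; GPT intent scoping and
    Bing Search are billed separately and not included here."""
    return (input_tokens * PRICE_INPUT
            + cached_input_tokens * PRICE_CACHED_INPUT
            + output_tokens * PRICE_OUTPUT) / 1_000_000

# Hypothetical run: 200k fresh input, 50k cached input, 30k output tokens.
print(round(o3_deep_research_cost(200_000, 30_000, 50_000), 3))  # → 3.325
```

Output tokens dominate the bill at 4× the input rate, so long synthesized reports, not the web data the agent reads, drive most of the cost.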

Deep Research is currently available in a limited public preview for Azure AI Foundry Agent Service customers. If you’re interested, you can request early access by signing up for the preview.