For models that don't provide built-in, in-service memory, the Microsoft Agent Framework allows you to use third-party solutions to store agent chat history.

How to Store Chat History Using External Storage in Microsoft Agent Framework

Chat history and memory allow agents to maintain context across conversations and remember user preferences, which enables them to provide personalized experiences. With the Microsoft Agent Framework, we can use in-memory chat message stores, persistent databases, or specialized memory services to cater to a variety of use cases. In this article, I’ll show you a simple example of how we can use an Azure Cosmos DB vector store to store conversations we have with an agent, and how we can retrieve those conversations so that our agents maintain context. ...
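
The full post walks through the Agent Framework’s chat message store abstraction; purely as a sketch of the storage side, here’s roughly how chat messages could be persisted and read back with the Azure Cosmos DB .NET SDK. The ChatRecord shape, the database and container names, and partitioning by conversationId are assumptions I’ve made for illustration, not part of the framework:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

// Hypothetical document shape for a single chat message (not part of any SDK).
public record ChatRecord(string id, string conversationId, string role, string text);

public class CosmosChatHistoryStore
{
    private readonly Container _container;

    public CosmosChatHistoryStore(CosmosClient client)
    {
        // Assumed database/container names; the container is partitioned by /conversationId.
        _container = client.GetContainer("agents-db", "chat-history");
    }

    // Persist a single message so the agent can pick the conversation up later.
    public Task SaveMessageAsync(ChatRecord message) =>
        _container.UpsertItemAsync(message, new PartitionKey(message.conversationId));

    // Read the stored messages for a conversation back, oldest first.
    public async Task<List<ChatRecord>> LoadConversationAsync(string conversationId)
    {
        var query = new QueryDefinition(
                "SELECT * FROM c WHERE c.conversationId = @id ORDER BY c._ts ASC")
            .WithParameter("@id", conversationId);

        var messages = new List<ChatRecord>();
        using var iterator = _container.GetItemQueryIterator<ChatRecord>(query);
        while (iterator.HasMoreResults)
        {
            messages.AddRange(await iterator.ReadNextAsync());
        }
        return messages;
    }
}
```

Plugging a store like this into an agent is then a matter of loading the conversation before a run and saving the new messages afterwards; the article covers the vector store and retrieval side in detail.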

January 12, 2026 · 14 min · Will Velida
With GitHub Models, we can test LLMs in our agents for free rather than paying for Azure AI Foundry usage

Using GitHub Models with the Microsoft Agent Framework

Almost a year ago, I wrote a blog post on how you could use GitHub Models with Semantic Kernel applications for dev and test purposes. Now that the Microsoft Agent Framework is available, I thought I’d write an updated article on how you can use GitHub Models with the new framework, so that you don’t have to provision Azure AI Foundry and pay for LLM usage while building agents. What is the Microsoft Agent Framework? It’s an open-source kit for building AI agents and agentic workflows in Python and C#. The Agent Framework is an extension of both the Semantic Kernel and AutoGen projects, and it provides a unified approach for building agents. The Semantic Kernel and AutoGen teams are working together to build the Microsoft Agent Framework. ...
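
The post wires this into the Agent Framework itself; as a minimal sketch of just the connection, here’s how a chat client can be pointed at GitHub Models using the official OpenAI .NET SDK and a GitHub personal access token. The endpoint URL and model id are assumptions, so check the GitHub Models documentation for the current values:

```csharp
using System.ClientModel;
using OpenAI;
using OpenAI.Chat;

// A GitHub personal access token is used instead of an Azure AI Foundry key.
string token = Environment.GetEnvironmentVariable("GITHUB_TOKEN")
    ?? throw new InvalidOperationException("Set the GITHUB_TOKEN environment variable.");

// Point the OpenAI client at the GitHub Models inference endpoint
// (assumed URL; check the GitHub Models docs for the current value).
var options = new OpenAIClientOptions
{
    Endpoint = new Uri("https://models.inference.ai.azure.com")
};

// "gpt-4o-mini" is an assumed model id available through GitHub Models.
var chatClient = new ChatClient("gpt-4o-mini", new ApiKeyCredential(token), options);

ChatCompletion completion = chatClient.CompleteChat(
    ChatMessage.CreateSystemMessage("You are a helpful assistant for .NET developers."),
    ChatMessage.CreateUserMessage("What is the Microsoft Agent Framework?"));

Console.WriteLine(completion.Content[0].Text);
```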

January 9, 2026 · 8 min · Will Velida

Building Remote MCP Servers with .NET and Azure Container Apps

A couple of months ago, I wrote a blog post on how you can create Model Context Protocol (MCP) servers using C#. Using a basic API, I was able to create an MCP server that retrieves Australian Football League (AFL) data and supplies it as context to LLMs, so I can ask questions about AFL results, teams, stats, and more. That blog post covered MCP servers that run locally on our machines using the stdio transport. In this article, I’ll talk about how we can use Server-Sent Events (SSE) transport to build remote MCP servers that we can host on Azure Container Apps. ...
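
As a taste of what the remote version looks like, here’s a rough sketch of the server’s entry point using the ModelContextProtocol C# SDK’s ASP.NET Core package with an HTTP-based transport instead of stdio. The SDK is still in preview, so method names may have shifted, and the AFL tool below is a hypothetical stand-in for the real API calls:

```csharp
using System.ComponentModel;
using ModelContextProtocol.Server;

var builder = WebApplication.CreateBuilder(args);

// Register the MCP server with an HTTP-based transport (instead of stdio)
// and discover tools declared in this assembly.
builder.Services
    .AddMcpServer()
    .WithHttpTransport()
    .WithToolsFromAssembly();

var app = builder.Build();

// Expose the MCP endpoints so remote clients can connect over the network.
app.MapMcp();

app.Run();

// A hypothetical tool; the real post exposes AFL data from an API.
[McpServerToolType]
public static class AflTools
{
    [McpServerTool, Description("Returns a placeholder result for an AFL team lookup.")]
    public static string GetTeamSummary(string teamName) =>
        $"Summary for {teamName} would be fetched from the AFL data API here.";
}
```

Because the transport is network-based rather than stdio, this can be packaged into a container and deployed to Azure Container Apps like any other ASP.NET Core app.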

June 20, 2025 · 10 min · Will Velida

How Tracing works in Azure AI Foundry Agents

Determining how Azure AI Foundry Agents make decisions is important for troubleshooting and debugging purposes. However, it can get complicated when our agents perform complex workflows. Our agents could perform numerous executions, making it difficult to track the decisions made by all of them, or some agents may invoke tools that invoke other tools, which invoke more tools, and so on. Tracing our agents helps us see the inputs and outputs involved in a particular agent run, as well as the order in which those agents were invoked. In this blog post, I’ll talk about how tracing agents works, how we can do some simple tracing using the Azure AI Foundry Agents playground, and how we can implement tracing in our pro-code agents using OpenTelemetry. ...
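
The post covers the Foundry-specific setup; as a generic sketch of the pro-code side, here’s a minimal OpenTelemetry configuration in .NET that wraps an agent invocation in an activity so the run shows up as a span. The source name and the console exporter are choices I’ve made for illustration; in practice you’d swap the console exporter for something like the Azure Monitor exporter:

```csharp
using System.Diagnostics;
using OpenTelemetry;
using OpenTelemetry.Resources;
using OpenTelemetry.Trace;

// A custom ActivitySource for our agent code; the name is an arbitrary choice.
var activitySource = new ActivitySource("MyCompany.Agents");

// Configure a tracer provider that listens to our source and writes spans
// to the console so traces are visible locally.
using var tracerProvider = Sdk.CreateTracerProviderBuilder()
    .ConfigureResource(r => r.AddService("afl-agent-app"))
    .AddSource("MyCompany.Agents")
    .AddConsoleExporter()
    .Build();

// Wrap an agent invocation in an activity so its inputs, outputs and timing
// are captured as a span.
using (var activity = activitySource.StartActivity("InvokeAgent"))
{
    activity?.SetTag("agent.input", "Who won the 2024 AFL Grand Final?");

    // ... call the agent here ...
    string response = "placeholder agent response";

    activity?.SetTag("agent.output", response);
}
```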

June 4, 2025 · 9 min · Will Velida
How to build Azure AI Agents with Azure AI Agent Service

Building AI Agents with Azure AI Agent Service

As the technology advances, generative AI models are becoming powerful enough to operate autonomously and automate tasks, an improvement over performing simple tasks in chat-like applications. This allows us to build AI agents: applications that use generative AI models with contextual data to automate tasks based on user input and the context they can perceive. In this article, I’ll talk about how we can build AI agents using the Azure AI Agent Service. ...
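
As a rough sketch of what the article builds up to, here’s the basic create-agent, create-thread, run loop using the Azure.AI.Projects preview SDK as it looked at the time of writing; class and method names come from a preview package and may have changed, and the model deployment name and question are placeholders:

```csharp
using Azure.AI.Projects;
using Azure.Identity;

// Connection string from the Azure AI Foundry project's overview page (assumed configuration).
var connectionString = Environment.GetEnvironmentVariable("PROJECT_CONNECTION_STRING")
    ?? throw new InvalidOperationException("Set PROJECT_CONNECTION_STRING.");

var client = new AgentsClient(connectionString, new DefaultAzureCredential());

// Create an agent backed by a deployed model (the deployment name is an assumption).
Agent agent = await client.CreateAgentAsync(
    model: "gpt-4o",
    name: "afl-assistant",
    instructions: "You are a helpful assistant that answers questions about the AFL.");

// Conversations happen on threads: create one, add a user message, then run the agent.
AgentThread thread = await client.CreateThreadAsync();
await client.CreateMessageAsync(thread.Id, MessageRole.User, "Which team plays at the MCG?");

ThreadRun run = await client.CreateRunAsync(thread.Id, agent.Id);

// Poll until the run finishes, then read the messages back.
while (run.Status == RunStatus.Queued || run.Status == RunStatus.InProgress)
{
    await Task.Delay(500);
    run = await client.GetRunAsync(thread.Id, run.Id);
}

PageableList<ThreadMessage> messages = await client.GetMessagesAsync(thread.Id);
foreach (ThreadMessage message in messages.Data)
{
    foreach (MessageContent content in message.ContentItems)
    {
        if (content is MessageTextContent text)
        {
            Console.WriteLine($"{message.Role}: {text.Text}");
        }
    }
}
```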

May 17, 2025 · 16 min · Will Velida