Multi-agent architecture: Chat.Api with Microsoft Agent Framework, Reporting.Api with GitHub Copilot SDK, connected via Entra Agent Identity.

Building a Multi-Agent System in .NET using the Microsoft Agent Framework, GitHub Copilot SDK, and Entra Agent ID

My side project (Biotrackr) started as a Fitbit data tracker and has gradually evolved into a full multi-agent system. Currently, it tracks my health data from Fitbit, including sleep patterns, activity levels, food intake, and weight. I’ve recently built a chat agent that allows me to ask questions about my health data through the UI. However, I wanted to take this a step further and use another agent to build charts and graphs that visualize my data. ...

April 8, 2026 · 13 min · Will Velida
Preventing OWASP ASI07 Insecure Inter-Agent Communication in a .NET AI agent with mutual authentication, signed messages, anti-replay, typed contracts, and protocol pinning.

Preventing Insecure Inter-Agent Communication in AI Agents

Biotrackr is a single-agent system. One agent, twelve tools, one identity. That is an architectural choice that eliminates an entire vulnerability class: Insecure Inter-Agent Communication (ASI07). But what happens when the system grows? Imagine Biotrackr evolves into a multi-agent platform: a Data Retrieval Agent that fetches health records, a Health Advisor Agent that provides wellness recommendations based on trends, and an Orchestrator Agent that coordinates them. Suddenly, agents are talking to each other, passing data, delegating tasks, and sharing context. Every message between them is a potential attack surface. ...

March 12, 2026 · 29 min · Will Velida
Preventing OWASP ASI06 Memory and Context Poisoning in a .NET AI agent with session isolation, content validation, cache TTLs, and immutable configuration.

Preventing Memory and Context Poisoning in AI Agents

Every time your AI agent saves a conversation, you’re creating a potential attack vector. ASI06 (Memory and Context Poisoning) asks a deceptively simple question: “can previous conversations corrupt future ones?” For my side project (Biotrackr), this is one of the more interesting risks. The chat agent persists conversation history to Cosmos DB, and those persisted conversations become context when a user continues an old chat. A poisoned message from two weeks ago could influence today’s analysis. The IMemoryCache used for tool response caching is shared across sessions, so a cached response could influence a different session’s results. ...

March 12, 2026 · 22 min · Will Velida
Preventing OWASP ASI10 Rogue Agents in a .NET AI agent with behavioural constraints, kill switches, audit logging, immutable tools, and defence in depth.

Preventing Rogue AI Agents

What happens when the agent itself becomes the threat? Not because of prompt injection (ASI01) or tool misuse (ASI02), but because the Claude model produces systematically wrong analysis, the Agent Framework has a bug in its tool loop, or the Anthropic API starts returning manipulated responses. Throughout this series, we’ve covered controls that protect the agent from external threats (hijacked goals, misused tools, stolen identities, supply chain poisoning, code execution, context poisoning, cascading failures, and trust exploitation). But what do you do when everything else fails and the agent itself starts behaving in ways you didn’t intend? ...

March 12, 2026 · 25 min · Will Velida
Preventing OWASP ASI04 Agentic Supply Chain Vulnerabilities in a .NET AI agent with SBOMs, dependency pinning, kill switches, and zero-trust architecture.

Preventing Agentic Supply Chain Vulnerabilities

Your AI agent’s security is only as strong as its weakest dependency. Whatever packages you use within your agents, you’re trusting that the published packages haven’t been tampered with and that they don’t contain vulnerabilities. The same applies to every transitive dependency in your graph. In Biotrackr, I’m using a couple of packages that are still in preview, so there may be flaky APIs that could affect my agent’s security and reliability. Agentic supply chain vulnerabilities are amplified in agents because AI frameworks are still in preview (at the time of writing). The technology is evolving rapidly, and these frameworks have deep dependency trees that are harder to audit. ...

March 11, 2026 · 18 min · Will Velida