Preventing OWASP ASI03 Identity and Privilege Abuse in a .NET AI agent with Entra Agent ID, RBAC, federated credentials, and per-action authorization.

Preventing Identity and Privilege Abuse in AI Agents

One of the challenges I faced while developing an agent for my side project (Biotrackr) was how to manage identity. Some AI agents share the same service principal or managed identity as the application, which is used to authenticate API calls, access databases, and so on. This is a problem: if the application has Contributor access to a database, so does the agent, and if the agent is compromised, the blast radius extends to the application's entire permission scope. ...

March 11, 2026 · 16 min · Will Velida
Preventing OWASP ASI02 Tool Misuse in a .NET AI agent with date range limits, page size caps, read-only tools, egress controls, and managed identity.

Preventing Tool Misuse in AI Agents

In my side project (Biotrackr), I have a chat agent that I use to query my data using natural language. The agent has 12 tools that call APIs to retrieve data that provides context for an LLM. I'm using Claude as my LLM provider, so Claude decides which tool to call and with what parameters. Now let's pretend we're bad actors trying to disrupt my agent. Say we prompt-inject it into performing an expensive query that retrieves 100 years of data (I'm not that old, thankfully!) in an attempt to return a massive payload, consume thousands of Claude API tokens, and hammer my APIM gateway. ...
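The date range limits and page size caps mentioned above can be sketched as plain guard clauses inside the tool layer. This is a minimal illustration, not Biotrackr's actual code: `ToolGuards`, `MaxRangeDays`, and `MaxPageSize` are hypothetical names and limits.

```csharp
using System;

// Hypothetical tool-parameter guards: clamp whatever the LLM passes in
// before it ever reaches the API. Names and limits are illustrative.
public static class ToolGuards
{
    public const int MaxRangeDays = 90;   // reject 100-year date ranges
    public const int MaxPageSize  = 50;   // cap payload size per call

    public static (DateOnly Start, DateOnly End) ClampDateRange(DateOnly start, DateOnly end)
    {
        if (end < start) (start, end) = (end, start);   // normalise swapped bounds
        if (end.DayNumber - start.DayNumber > MaxRangeDays)
            start = end.AddDays(-MaxRangeDays);          // shrink oversized ranges
        return (start, end);
    }

    public static int ClampPageSize(int requested) =>
        Math.Clamp(requested, 1, MaxPageSize);
}
```

Because the clamps run server-side in the tool itself, a prompt-injected "fetch 100 years of data" request degrades into a bounded query instead of an expensive one.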

March 11, 2026 · 14 min · Will Velida
Preventing OWASP ASI05 Unexpected Code Execution in a .NET AI agent with input validation, non-root containers, static tool registration, and runtime monitoring.

Preventing Unexpected Code Execution in AI Agents

Can your AI agent run code? If not, you probably assume unexpected code execution doesn't apply to you. However, this goes a lot deeper than eval(). Input validation, container security, static analysis, and runtime monitoring all play a part here. Even an agent with read-only capabilities and no code interpreter has an execution environment, tool parameters that flow from LLM output, and a CI/CD pipeline that needs to be secured. ...
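The point about tool parameters flowing from LLM output can be sketched as a strict allow-list check at the tool boundary. The metric names below are hypothetical, not Biotrackr's actual tool schema:

```csharp
using System;

// Hypothetical guard: an LLM-supplied tool parameter is validated against a
// closed allow-list before it can reach anything that executes or queries.
public static class ToolInputValidator
{
    private static readonly string[] AllowedMetrics = { "sleep", "activity", "weight" };

    public static string RequireMetric(string llmSupplied)
    {
        // Normalise, then check against the closed set -- never interpolate
        // raw model output into queries, file paths, or process arguments.
        var candidate = llmSupplied?.Trim().ToLowerInvariant()
            ?? throw new ArgumentNullException(nameof(llmSupplied));

        return Array.IndexOf(AllowedMetrics, candidate) >= 0
            ? candidate
            : throw new ArgumentException($"Metric '{candidate}' is not permitted.");
    }
}
```

Rejecting anything outside the allow-list means an injected string like `sleep; rm -rf /` fails validation before it can reach an execution sink.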

March 11, 2026 · 16 min · Will Velida
Building a health data chat agent using Claude as the LLM backend with the Microsoft Agent Framework in .NET 10, featuring function tools, AG-UI streaming, and system prompt design.

Building a Health Data Chat Agent with Claude and the Microsoft Agent Framework

Using the Microsoft Agent Framework, we can build agents that interact with our data through chat. In my personal project, I decided to create a Chat API that lets me query my data via a chat interface using an LLM. I wasn't keen on using OpenAI, or on provisioning Microsoft Foundry just to create a deployment for one of its hosted models. Instead, I grabbed an API key for Anthropic so I could use Claude, and hooked it up to my agent without having to manage any Foundry infrastructure. ...

March 10, 2026 · 14 min · Will Velida
Learn how to give AI agents their own discrete, auditable identities using Microsoft Entra Agent ID, enabling them to authenticate to Azure services like Cosmos DB and Blob Storage with scoped RBAC permissions via the .NET Azure SDK.

How to Call Azure Services from an AI Agent Using Entra Agent ID and the .NET Azure SDK

Introduction: The Identity Problem with AI Agents

AI agents are moving beyond simple prompt-and-response. They're calling APIs, reading databases, and writing to storage: doing actions on real resources with real consequences. This raises a question every platform team eventually asks: whose identity should the agent use? Today, most agents authenticate to Azure services in one of two ways:

- Delegated (on-behalf-of-user): the agent acts as the signed-in user. This can work for interactive scenarios, but it means the agent inherits all of the user's permissions, which is far more than a narrowly-scoped tool call should need. It also falls apart for background or autonomous agents that run without a user session.
- App-only (managed identity or client credentials): the agent authenticates as the hosting application. This solves the "no user present" problem, but now every agent running on the same compute shares a single identity. You can't distinguish which agent accessed which resource in your logs, and you can't give one agent read-only access to Cosmos DB while another gets read-write. As far as Azure is concerned, the agent is the app.

Neither option gives you what you actually want: a discrete, auditable identity for the agent itself. One that's separate from the user, separate from the hosting infrastructure, and scoped to exactly the permissions the agent needs. ...
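The federated-credential wiring this post builds up to can be sketched with Azure.Identity's `ClientAssertionCredential`: the host's managed identity only proves who the compute is, and the token it receives is presented as the agent application's federated client assertion. The IDs are placeholders, and this is a sketch of the pattern under those assumptions, not the post's full implementation:

```csharp
using Azure.Core;
using Azure.Identity;
using Microsoft.Azure.Cosmos;

// Placeholder identifiers -- substitute your tenant and the agent's OWN
// app registration (its agent identity), not the hosting app's client ID.
var tenantId      = "<tenant-id>";
var agentClientId = "<agent-client-id>";

var hostIdentity = new ManagedIdentityCredential();

// The agent authenticates as itself; the host's managed identity token is
// exchanged as a federated client assertion, so no client secret is stored.
var agentCredential = new ClientAssertionCredential(
    tenantId,
    agentClientId,
    async cancellationToken =>
    {
        var token = await hostIdentity.GetTokenAsync(
            new TokenRequestContext(new[] { "api://AzureADTokenExchange/.default" }),
            cancellationToken);
        return token.Token;
    });

// Data-plane calls now show up in audit logs under the agent's identity, and
// RBAC can scope this agent to read-only while another agent gets read-write.
var cosmos = new CosmosClient("https://<account>.documents.azure.com:443/", agentCredential);
```

The design point is that the credential chain, not the hosting compute, determines who the caller is, which is what makes per-agent RBAC and per-agent audit trails possible.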

March 2, 2026 · 28 min · Will Velida