Research & Engineering

AI agent infrastructure, from prototype to production.

What I'm building

LLM agents need infrastructure that is efficient (long contexts, memory optimization, redundancy reduction) and secure (least-privilege access, adversarial resistance). I build at this intersection: context optimization, authorization, memory management, and observability.

Currently exploring

  • Agent Security - Shipping agentic-authz: OpenFGA + MCP authorization for AI agents
  • Context Engineering - Shipping Distill 1.0 with MCP integration and a cloud waitlist
  • Open Source - OpenFGA contributions, agent infrastructure tools
  • Speaking - Conferences on AI agent security and authorization

Focus areas

1. Context Efficiency & Reliability

"How do we make LLM outputs reliable and deterministic through better context management?"

Clean, deduplicate, and optimize context before it reaches the model. Deterministic algorithms over probabilistic heuristics.
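As a minimal sketch of the deterministic approach (this is an illustration, not Distill's actual pipeline): exact-match deduplication by content hash always produces the same output for the same input, unlike similarity heuristics with tunable thresholds.

```python
import hashlib

def dedupe_context(chunks: list[str]) -> list[str]:
    """Drop exact-duplicate context chunks, preserving first-seen order.

    Deterministic: identical inputs always yield identical outputs.
    """
    seen: set[str] = set()
    result: list[str] = []
    for chunk in chunks:
        # Normalize whitespace so trivially re-wrapped text still matches.
        key = hashlib.sha256(" ".join(chunk.split()).encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            result.append(chunk)
    return result
```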

2. Agent Authorization & Audit Trails

"How do we enforce fine-grained, capability-based authorization for AI agents with full auditability?"

Google Zanzibar-style authorization for agent-tool interactions via OpenFGA and MCP. Dynamic capability tokens, real-time policy enforcement, audit logging.
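The core Zanzibar idea can be sketched in a few lines: authorization is a set of relationship tuples (object, relation, subject), and every check is both a set lookup and an audit event. The classes below are a hypothetical toy model for illustration, not the OpenFGA SDK.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RelationTuple:
    """Zanzibar-style tuple: subject has relation on object."""
    object: str
    relation: str
    subject: str

class RelationStore:
    def __init__(self) -> None:
        self.tuples: set[RelationTuple] = set()
        self.audit_log: list[str] = []

    def write(self, obj: str, relation: str, subject: str) -> None:
        self.tuples.add(RelationTuple(obj, relation, subject))

    def check(self, obj: str, relation: str, subject: str) -> bool:
        # Every authorization decision is recorded for auditability.
        allowed = RelationTuple(obj, relation, subject) in self.tuples
        self.audit_log.append(f"check {subject} {relation} {obj} -> {allowed}")
        return allowed

store = RelationStore()
store.write("tool:web_search", "can_invoke", "agent:researcher")
store.check("tool:web_search", "can_invoke", "agent:researcher")  # True
store.check("tool:shell_exec", "can_invoke", "agent:researcher")  # False
```

A real deployment adds what the toy omits: relation rewrites (e.g. owners implicitly get invoke rights), tuple expiry for dynamic capability tokens, and externalized audit storage.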

3. Adversarial Robustness & Observability

"How can agents maintain safety under prompt injection, tool poisoning, and adversarial tool responses?"

Observability and tracing to detect attacks on agent tool-use pipelines. GPU-level profiling for performance and security.
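A minimal sketch of the tracing idea, with made-up pattern rules for illustration: wrap each tool response in a trace span and flag content that matches known injection phrasings. Real detection needs far richer signals than regexes.

```python
import re
import time

# Hypothetical patterns that often signal prompt injection in tool output.
SUSPICIOUS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"you are now", re.I),
]

def trace_tool_call(tool: str, response: str, trace: list[dict]) -> list[str]:
    """Record a span for one tool call and return any flags raised."""
    flags = [p.pattern for p in SUSPICIOUS if p.search(response)]
    trace.append({
        "tool": tool,
        "ts": time.time(),          # span timestamp
        "response_len": len(response),
        "flags": flags,             # non-empty => candidate attack
    })
    return flags

trace: list[dict] = []
flags = trace_tool_call("web_search", "Ignore previous instructions and ...", trace)
assert flags  # the injected phrase was flagged
```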

Products & Prototypes

Working with me

Systems engineering experience meets research curiosity.

  • Ships: takes work from research prototype to production; built Distill, help maintain OpenFGA
  • Systems depth: production infrastructure at Ona (formerly Gitpod)
  • Open source: OpenFGA maintainer (CNCF), GitHub1s maintainer
  • Cross-domain: security, distributed systems, ML infrastructure
  • 40+ articles on siddhantkhare.com/writing

Let's talk