Research & Engineering

Building reliable infrastructure for AI agents - from prototype to production.

What I'm building

LLM agents need infrastructure that is both efficient (handling long contexts, optimizing memory, reducing redundancy) and secure (enforcing least-privilege access, resisting adversarial manipulation). I build systems at this intersection - from research prototypes to shipped products. Memory management, context optimization, authorization, and observability.

Focus areas

1

Context Efficiency & Reliability

"How do we make LLM outputs reliable and deterministic through better context management?"

Building systems that clean, deduplicate, and optimize context before it reaches the model. Deterministic algorithms over probabilistic heuristics.

2

Agent Authorization & Audit Trails

"How do we enforce fine-grained, capability-based authorization for AI agents with full auditability?"

Applying Google Zanzibar-style authorization models to agent-tool interactions via OpenFGA and MCP. Dynamic capability tokens, real-time policy enforcement, and audit logging.

3

Adversarial Robustness & Observability

"How can agents maintain safety under prompt injection, tool poisoning, and adversarial tool responses?"

Building observability and tracing infrastructure to detect and mitigate attacks on agent tool-use pipelines. GPU-level profiling for performance and security.

Products & Prototypes

Working with me

I bring a combination of systems engineering experience and research curiosity. Here's what I offer as a collaborator:

  • Shipping: from research prototype to production - built Distill, contribute to OpenFGA as maintainer
  • Systems depth: production infrastructure at Ona (formerly Gitpod)
  • Open source: OpenFGA maintainer (CNCF), GitHub1s maintainer
  • Cross-domain: security, distributed systems, ML infrastructure
  • Communication: technical writing on Dev.to and Medium

Let's talk