Portfolio
01

Core Research Questions

  • How should federal agencies track and govern AI-related spending?
  • What institutional levers (OMB guidance, grantmaking, policy memos) can surface hidden risks?
  • How can iterative public artifacts (blogs -> posters -> papers -> memos) accelerate impact?
02

How can grantmaking data be used to surface and govern AI-related federal spending?

A poster and ongoing research stream on using grantmaking data to surface and govern AI-related federal spending. The work germinated at CDT after the OMB memo guidance gained limited traction; I advanced the idea through iterative public artifacts to demonstrate value and build momentum.

The phrase "AI-related discretionary spending" appears to have no search results before I used it publicly; it later shows up in a federal AI action plan (I cannot prove causality, but the timing is notable).
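To make the approach concrete, here is a minimal sketch of the kind of first-pass screen this research stream implies: keyword-matching free-text award descriptions to flag potentially AI-related grants. The field names, keyword list, and records below are illustrative assumptions, not a published methodology.

```python
import re

# Illustrative term list; a real screen needs curation and validation
# against false positives and evolving vocabulary.
AI_TERMS = [
    r"\bartificial intelligence\b",
    r"\bmachine learning\b",
    r"\blarge language model",
    r"\bneural network",
    r"\bAI\b",  # word-bounded so "detail" or "maintenance" do not match
]
AI_PATTERN = re.compile("|".join(AI_TERMS), re.IGNORECASE)

def flag_ai_related(awards):
    """Yield award records whose description mentions an AI-related term."""
    for award in awards:
        if AI_PATTERN.search(award.get("description", "")):
            yield award

# Toy records standing in for rows from a grants database export.
sample = [
    {"id": "G-001", "description": "Procure machine learning tools for claims triage"},
    {"id": "G-002", "description": "Routine facility maintenance"},
]
print([a["id"] for a in flag_ai_related(sample)])  # -> ['G-001']
```

A keyword screen like this is only a starting point; the governance value comes from what agencies do with the flagged awards.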
03

How can we capture candid feedback on AI from government employees?

Exploring channels that invite candid, actionable feedback on production AI tools inside agencies; prototypes move from informal backchannels to lightweight "rulewriter" interaction patterns.

Trajectory

Idea: create a safe backchannel for candid AI feedback
Next: pilot inside an agency
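As a thought experiment only (nothing here is piloted), a feedback record for such a backchannel might separate content from identity, keeping only a salted, non-reversible token for deduplication. All names below are hypothetical.

```python
import hashlib
from dataclasses import dataclass, field
from datetime import datetime, timezone

def submitter_token(email: str, salt: str) -> str:
    """Stable but non-reversible token: lets repeat submitters be
    deduplicated without storing who they are."""
    return hashlib.sha256((salt + email).encode()).hexdigest()[:12]

@dataclass
class FeedbackRecord:
    tool: str        # which production AI tool the feedback concerns
    rating: int      # 1-5 usefulness score
    comment: str     # free-text, candid observations
    token: str       # output of submitter_token, never the raw identity
    submitted_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = FeedbackRecord(
    tool="document-summarizer",
    rating=2,
    comment="Summaries drop statutory citations; staff re-check everything.",
    token=submitter_token("analyst@agency.gov", salt="rotate-quarterly"),
)
print(record.tool, record.token)
```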
04

How can agencies quantify transparency debt in AI systems?

An early framework to score and track accumulating "transparency debt" across deployed AI systems, balancing usability, governance, and auditability.

Trajectory

Idea: define and quantify "transparency debt"
Next: publish a validator and dashboard
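A minimal sketch of what a quantified score could look like, assuming (purely for illustration) four checkable dimensions and fixed weights; the framework's actual dimensions, weights, and aggregation rule are still open questions.

```python
from dataclasses import dataclass

@dataclass
class SystemAudit:
    """Per-system checks, each scored 0.0 (absent) to 1.0 (complete)."""
    documentation: float   # model cards, data sheets, intended use
    explainability: float  # can outputs be traced to inputs and logic?
    audit_logging: float   # are decisions logged for later review?
    public_notice: float   # is use of the system disclosed?

# Hypothetical weights reflecting governance priorities.
WEIGHTS = {
    "documentation": 0.3,
    "explainability": 0.25,
    "audit_logging": 0.25,
    "public_notice": 0.2,
}

def transparency_debt(audit: SystemAudit) -> float:
    """Debt = weighted shortfall from full transparency (0 = none, 1 = max)."""
    return sum(
        weight * (1.0 - getattr(audit, dimension))
        for dimension, weight in WEIGHTS.items()
    )

# A system with strong logging but no public notice still accrues debt.
print(round(transparency_debt(
    SystemAudit(documentation=0.5, explainability=0.4,
                audit_logging=0.9, public_notice=0.0)
), 3))  # -> 0.525
```

Scoring each dimension as a shortfall makes the debt framing literal: the score grows as transparency work is deferred, which is what a validator or dashboard would track over time.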
05

How can we design practical oversight interfaces for grant officers?

Interface sketches and lightweight prototypes that let program officers spot AI-related risks and opportunities without leaving their workflow.

Trajectory

Observation: oversight work is fragmented across tools
Next: experiment with embedded provenance widgets
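One way an embedded provenance widget could stay lightweight is to render a single line per field: where the value came from and when it was last verified. A hypothetical sketch, with invented field and system names:

```python
from dataclasses import dataclass

@dataclass
class Provenance:
    field_name: str    # e.g., "award_amount"
    source: str        # originating system or document
    retrieved: str     # ISO date the value was pulled
    verified_by: str   # "system" or a reviewer identifier

def render_badge(p: Provenance) -> str:
    """One-line summary an embedded sidebar widget could show inline."""
    return f"{p.field_name}: {p.source} ({p.retrieved}, verified by {p.verified_by})"

print(render_badge(Provenance(
    field_name="award_amount",
    source="grants-db export",
    retrieved="2024-05-01",
    verified_by="program officer",
)))
```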