Sarah Wooders
CS/Math @ MIT
Sarah Wooders is a talented entrepreneur and computer scientist who has made significant strides in the fields of artificial intelligence and e-commerce. She was a uFirst Fellow at Underscore VC in 2019, where she founded Allparel, an AI-powered product search tool for clothing.[3]
Wooders holds a PhD in Computer Science from UC Berkeley, where she was advised by Ion Stoica and Joseph E. Gonzalez, focusing on AI systems.[15] Prior to her doctoral studies, she completed her bachelor's degree in Computer Science and Engineering and Mathematics at MIT.[1]
Her professional experience includes:
- Co-founder and CTO of Letta (August 2024 - Present), a company building a platform for stateful AI agents.[16]
- CEO and co-founder of Glisten.ai (June 2019 - November 2021), a YC W20 startup that organized e-commerce data.[15]
- Various internships and research positions at organizations including SingleStore, Bloomberg LP, and MIT CSAIL.[1]
Wooders has published several academic papers, including work on MemGPT, Skyplane, and Cloudcast, which have garnered significant citations.[4] Her research interests span systems for machine learning, data systems, and cloud computing.[5]
Recently, Wooders and her co-founder Charles Packer raised $10 million for their AI memory project,[2] further cementing her position as a rising star in the AI industry.
Highlights
With MemGPT/@Letta_AI - we’ve always tried to focus on simple abstractions that are "bitter lesson pilled" and scale with model improvement. We added memory tools to agents with MemGPT back in 2023, and virtual filesystems to agents in mid-2025 - these abstractions have aged very well by LLM-era standards.
But ironically, just as memory tools and virtual filesystems for agents have started to become mainstream, they have actually become inhibiting to frontier model capabilities. Models today can do far more than call simple memory or filesystem tools: they can write complex scripts and execute them in REPLs or bash environments. There’s been a lot of excitement about RLMs for this exact reason: the most powerful way to manage context is not through tools, but through programs.
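To make that concrete, here's a minimal sketch of the "programs over tools" idea - the context/ directory layout, file format, and query string are all illustrative assumptions on my part, not anything Letta ships. Instead of calling a fixed memory-search tool, the model writes and runs a short script over its own context files, and only the distilled result flows back into its context window:

```python
# Hypothetical example of a script an agent might write and execute in a
# REPL/bash environment to manage its own context, rather than calling a
# fixed memory tool. The context/ layout and query are assumptions.
import pathlib

query = "deployment checklist"
hits = []
for path in pathlib.Path("context").rglob("*.md"):
    for lineno, line in enumerate(path.read_text().splitlines(), start=1):
        if query in line.lower():
            hits.append(f"{path}:{lineno}: {line.strip()}")

# Only this distilled summary re-enters the model's context window,
# not the entire corpus of memory files.
print("\n".join(hits[:20]) or "no matches")
```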
Context Repositories are our latest design for agent memory, built to unleash the full power of agent coding capabilities for memory/context management. An agent’s context is stored as local files inside git repositories - when agents (or subagents they invoke) modify context, they commit and push their updates, which are eventually “recompiled” into their system prompts. This lets agents programmatically edit memory without that memory being tied to a single machine - and it also provides a mechanism for coordination across multiple memory subagents via git-based conflict resolution.
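Here's a minimal sketch of that loop, assuming a local clone at agent-context/, plain git subprocess calls, and naive file concatenation for the "recompile" step - all of these are my illustrative assumptions, not Letta's actual implementation:

```python
# Hypothetical sketch of the context-repository pattern: agent memory as
# files in a git repo, edited programmatically, then recompiled into a
# system prompt. Names and layout are assumptions, not Letta's API.
import pathlib
import subprocess

REPO = pathlib.Path("agent-context")  # local clone of the agent's context repo

def edit_memory(relpath: str, new_text: str, message: str) -> None:
    """Write a context file, then commit and push so the update
    is not tied to this machine."""
    path = REPO / relpath
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(new_text)
    subprocess.run(["git", "-C", str(REPO), "add", relpath], check=True)
    subprocess.run(["git", "-C", str(REPO), "commit", "-m", message], check=True)
    subprocess.run(["git", "-C", str(REPO), "push"], check=True)

def recompile_system_prompt() -> str:
    """Pull the latest context and concatenate files into a system prompt.
    Concurrent edits by multiple memory subagents surface as ordinary
    git merge conflicts rather than silent overwrites."""
    subprocess.run(["git", "-C", str(REPO), "pull", "--rebase"], check=True)
    return "\n\n".join(p.read_text() for p in sorted(REPO.rglob("*.md")))
```

One appeal of building on git rather than a bespoke memory store is that history, rollback, and multi-writer conflict resolution come for free.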
Very excited to see this finally released - it’s pretty crazy how big of a difference it makes to your agent’s memory!
Long context is overrated.
When's the last time you wished you could load an entire book into your memory at once?
Probably never, because you just read the book and remembered the important parts, or looked them up.

