A quiet movement is forming: entrepreneurs building for human agency, not merely for agents' sake.
Read our invitation to join the discussion around building truly agentic tech: products that preserve human agency.
Credit: Autonomy Cube by Trevor Paglen and Jacob Appelbaum. Installed in galleries and civic spaces, this sculpture provides not just symbolism but function: an open Wi-Fi network that routes traffic through Tor. Viewers become participants in a privacy-preserving mesh. Like agentic tech, it reimagines consent, ownership, and transparency as embedded system design—not abstraction.
Thanks to Zoe Weinberg and Vivian Chong for their extensive contributions!
Artificial intelligence creates unprecedented opportunity for individual self-expression, personalization, and productivity—a form of enhanced sovereignty, or agency, that John Stuart Mill could never have anticipated. But unless we start prioritizing human agency over AI agents, we risk building a panopticon that may irrevocably undermine digital liberties and freedom.
In 2025, a powerful new current is emerging in technology—one that places humans, not algorithms, at the center of the digital experience. While much of the tech world races toward a future of AI agents working on our behalf, a growing community of founders, engineers, and advocates is asking a more fundamental question: How do we ensure technology enhances human agency rather than diminishes it?
We invite you to explore this burgeoning field with us as we chronicle the entrepreneurs building the next generation of agentic tech—products that expand human agency rather than merely deploying automated agents.
The human agency vs. AI agent distinction — and why the former should inform the latter
Lately, when technologists talk about agency, they often confuse human agency with AI agency. When AI engineers say agent today, they usually mean a loop of code fine-tuned to pursue a goal with minimal human supervision. Yet for those of us who cut our teeth in privacy tech, agentic has never been a property of software. It is a property of people—the stubborn, creative, contradictory capacity to decide for ourselves. The term agentic has its roots in psychology: to lead an agentic life is to be a high-agency person, with a high degree of self-determination and empowerment.
This semantic shift mirrors what happened with "autonomy"—a concept that once referred exclusively to human self-determination but now commonly describes systems that operate independently of human control. In both cases, terms that originated in human capability have been repurposed to describe the very technologies that might limit that capability.
Conflating these two meanings could have disastrous consequences: if the hype cycle continues unexamined, we will wake up in a world where machine agency is celebrated while human agency is treated as technical debt, an inconvenience to be abstracted away by design.
This context gives rise to a broader movement that we're calling Agentic Tech.
Data is your digital body
Digital authoritarianism and surveillance capitalism are siblings. One serves state interests, the other corporate profits. Both feed on surplus data and both diminish individual judgment. We believe in profitable business models, but not at the expense of human agency. And we’re committed to building a world where technologists and users don’t have to decide between the two.
What's at stake isn't just where your data is stored, but who has access to your mind. Privacy is a means to an end, not an end itself. The problem with surveillance capitalism isn't simply data collection—it's the behavioral modification that follows.
The missing ingredient is cognitive consent: the human capacity to interrogate, override, or simply say no.
Consent is sexy. In digital spaces, it means having agency over your information environment, awareness about what you're being asked to do, and control over how you respond.
The two commitments of agentic tech
Over the past three years—predating AI tools like ChatGPT—we have convened engineers, cryptographers, policy wonks, founders, and artists who share this preoccupation. Across those conversations, two design commitments surface again and again:
Transparency: Transparency is required to obtain informed consent. Systems must disclose—at transaction time—what signals they're collecting, how inferences are drawn, and who profits. This means nutritional labels for data flows, not fourteen-page End User License Agreements (EULAs). Transparency encompasses open-source software, verifiable processes, and explainable systems that allow you to understand what's happening with your data and why.
Ownership: Not just of identity, but of all your digital presence—your data, credentials, assets, and creative work. This means the freedom to move between platforms without starting over, the ability to grant or revoke access to your information, and the agency to determine how your digital footprint is used.
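To make the "nutritional label for data flows" idea concrete, here is a minimal sketch of what such a transaction-time disclosure might look like. The field names and structure are our illustrative assumptions, not an existing standard:

```typescript
// A hypothetical "nutritional label" for a single data flow,
// shown to the user at transaction time. Field names are
// illustrative assumptions, not an established schema.
interface DataFlowLabel {
  signalsCollected: string[]; // raw signals the system gathers
  inferencesDrawn: string[];  // what the system derives from those signals
  beneficiaries: string[];    // who profits from the data
  retentionDays: number;      // how long the data is kept
  revocable: boolean;         // can the user withdraw consent later?
}

// Render the label as a short, human-readable disclosure.
function renderLabel(label: DataFlowLabel): string {
  return [
    `Collects: ${label.signalsCollected.join(", ")}`,
    `Infers: ${label.inferencesDrawn.join(", ")}`,
    `Benefits: ${label.beneficiaries.join(", ")}`,
    `Retained for: ${label.retentionDays} days`,
    `Consent revocable: ${label.revocable ? "yes" : "no"}`,
  ].join("\n");
}

const example: DataFlowLabel = {
  signalsCollected: ["page views", "search queries"],
  inferencesDrawn: ["topic interests"],
  beneficiaries: ["the publisher", "ad partners"],
  retentionDays: 30,
  revocable: true,
};

console.log(renderLabel(example));
```

The point of a structure like this is that it is small enough to read in one glance, yet explicit about the three things the transparency commitment names: signals, inferences, and beneficiaries.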
If a product advances these commitments, we count it as agentic—whatever stack it runs on. If it erodes them, no amount of synthetic autonomy can redeem it.
An invitation to build together
Fortunately, a new generation of technologists sees the opportunity to leverage technology to advance human agency rather than erode it.
Some are building local-first applications where your data stays on your device. Others are creating decentralized networks that resist censorship and control. More are developing privacy-enhancing technologies that shield personal information from exploitation. What unites these diverse efforts is a shared belief that technology should empower rather than extract.
This note reflects our guiding principles and open door. Over the coming months we'll share profiles of agentic tech builders, observations from the venture capital ecosystem, and historical case studies that inform this movement.
We know buzzwords decay. Movements endure only if they stay relevant, self-aware, and open. So treat this newsletter as a living document. Debate with us. Send footnotes. Propose panels, policies, pull requests. The stakes—civic, economic, existential—are too high for spectators.
We're building a community dedicated to ensuring that technology serves human flourishing, not the other way around. Join us in creating a technological future that amplifies our uniquely human capacity for choice, creativity, and connection. Subscribe today to our Substack (below) and event series on Luma.